If the past is any guide, the thrilling future of neuroscience has already arrived, but most of us just haven't noticed it yet.
In the case of previous scientific breakthroughs that elevated the human condition, such as the discovery that bacteria cause infectious disease (leading to antiseptics and antibiotics) and the discovery that silicon integrated circuits could be made inexpensively (fueling the digital revolution), the key discoveries emerged decades before anyone, even leading scientists, grasped their full importance.
Ignaz Semmelweis discovered that "cadaverous particles" (bacteria) caused disease in 1848, over 20 years before antiseptic techniques to combat infection were adopted. The integrated circuit (1958) and Complementary Metal-Oxide-Semiconductor (CMOS) technology (1963) likewise arrived long before these discoveries made possible Moore's Law (digital circuit performance doubles roughly every 18 months), personal computers, mobile phones, and the World Wide Web.
I believe that developments comparable to previous seminal scientific breakthroughs have already occurred in neuroscience, but most of the world hasn't realized it yet for a number of reasons, chief among them that some of these earthshaking advances aren't actually in neuroscience at all, but in fields such as Computational Mathematics and Artificial Intelligence (AI).
Big neuroscience advances
Before describing the "non-neuroscience" advances that are propelling neuroscience into an exciting future, let me focus on recent key breakthroughs within neuroscience itself. Ian Stevenson and Konrad Kording of Northwestern University showed, in a 2011 paper in Nature Neuroscience, that neuroscientists have doubled the number of individual neurons that can be simultaneously recorded roughly every seven years since 1950, producing a "Moore's law" of neuroscience that has taken us from studying one neuron at a time to nearly a thousand. Techniques such as optogenetic recording, carbon nanotube electrode arrays, and injectable mesh arrays of nanoelectrodes now enable neuroscientists to simultaneously record from and stimulate large populations of neurons.
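For the numerically inclined, the arithmetic of that doubling law is simple enough to sketch in a few lines of code. This is an illustrative extrapolation from the doubling time quoted above, not Stevenson and Kording's actual published fit:

```python
# Illustrative extrapolation of the Stevenson-Kording doubling law:
# simultaneously recorded neurons double roughly every seven years,
# starting from about one neuron in 1950. (Not the paper's exact fit.)
DOUBLING_TIME_YEARS = 7
BASE_YEAR, BASE_NEURONS = 1950, 1

def projected_neurons(year):
    """Projected number of simultaneously recorded neurons in a given year."""
    return BASE_NEURONS * 2 ** ((year - BASE_YEAR) / DOUBLING_TIME_YEARS)

for year in (1950, 1980, 2011, 2025):
    print(f"{year}: ~{projected_neurons(year):,.0f} neurons")
```

Run it and you get roughly 420 neurons by 2011, in the same ballpark as the "nearly a thousand" figure, and well past a thousand by the mid-2020s.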
Apart from the therapeutic value of such massive recording and stimulation capabilities for treating diseases such as epilepsy and Parkinson’s, the ability to read and write to large populations of neurons has opened up the possibility of directly interfacing brains to computers.
Miguel Nicolelis of Duke University has already done this in monkeys, using an array of electrodes implanted in motor cortex to read and interpret neuronal discharges in large populations of neurons, so that a monkey with Nicolelis's implants can precisely control an artificial robotic arm by "thinking" alone.
This staggering achievement illustrates the crucial importance of "non-neuroscience" advances to neuroscience itself: Nicolelis could not have made sense of the overwhelming flood of data pouring out of his monkeys' brains without the assistance of high-performance computers and Machine Learning (a kind of Artificial Intelligence) algorithms that learned to interpret the complex firing patterns of the large neuronal populations that, in turn, learned to control a robot arm.
Put another way, neuroscientists have gotten so good at studying the brain with micro-recording techniques that they cannot, by themselves, hope to understand the overwhelming complexity of what they have discovered.
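To make the decoding idea concrete, here is a minimal sketch of the kind of population decoding involved: learning a linear map from many neurons' firing rates to arm velocity. The data are synthetic and the plain least-squares read-out is deliberately simple; the decoders used in real brain-machine interface experiments are far more elaborate.

```python
# Minimal sketch of neural population decoding: fit a linear map from
# the firing rates of 100 simulated neurons to a 2-D arm velocity.
# Synthetic data and plain least squares stand in for the far more
# sophisticated decoders used in real brain-machine interfaces.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 100, 2000

velocity = rng.standard_normal((n_samples, 2))      # "true" arm velocity

# Each neuron's rate is a noisy linear function of velocity (its "tuning"),
# a common simplification for motor cortex neurons.
tuning = rng.standard_normal((2, n_neurons))
rates = velocity @ tuning + 0.5 * rng.standard_normal((n_samples, n_neurons))

# Train the decoder on the first half of the data, test on the second.
train, test = slice(0, 1000), slice(1000, None)
weights, *_ = np.linalg.lstsq(rates[train], velocity[train], rcond=None)
decoded = rates[test] @ weights

r = np.corrcoef(decoded[:, 0], velocity[test][:, 0])[0, 1]
print(f"decoded vs. true x-velocity correlation: r = {r:.2f}")
```

Even this toy version recovers the hidden velocity signal almost perfectly, which is exactly why machine learning makes the "overwhelming flood of data" tractable.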
Enter Computational Math and AI
Fortunately for neuroscience, mathematicians, data scientists, and computer scientists have been wrestling with their own “information overload” challenges, coping with exponential increases in the volume, variety and velocity of digital data spawned by the Moore’s law revolution in digital technology.
Google, for instance, ingests unimaginable volumes of data every second, data that it must somehow "monetize" (make money from, because its services are largely free) by precisely targeting digital advertisements to people who use Google search or Gmail. Google can do this only with the aid of massive cloud-computing systems running complex math and AI algorithms that quickly recognize patterns (such as which people who search for item "A" are likely to purchase item "B") and act on these insights to serve up ads in real time. (When you enter a Google search, you get back different ads in the sidebars than I do, because your likely purchasing behavior is different from mine.)
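A toy example with made-up user logs shows the basic "searched A, likely to buy B" arithmetic; real ad-targeting systems do this across billions of events with far richer statistical models:

```python
# Toy version of the "people who search for A tend to buy B" pattern,
# computed from made-up event logs.
all_users  = {f"u{i}" for i in range(1, 21)}          # 20 users
searched_A = {"u1", "u2", "u3", "u4", "u5"}           # searched for item A
bought_B   = {"u2", "u3", "u5", "u9"}                 # purchased item B

p_buy         = len(bought_B) / len(all_users)                  # P(buy B)
p_buy_given_A = len(searched_A & bought_B) / len(searched_A)    # P(buy B | searched A)

print(f"P(buy B)              = {p_buy:.2f}")            # 0.20
print(f"P(buy B | searched A) = {p_buy_given_A:.2f}")    # 0.60
print(f"lift = {p_buy_given_A / p_buy:.1f}x")            # searchers are 3x likelier to buy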
This sort of digital mind reading through esoteric pattern analysis is precisely what neuroscientists such as Nicolelis need, both to interpret patterns of activity in large populations of neurons and to communicate with those same neurons.
One branch of AI, called "cognitive computing," holds particular promise for extending Nicolelis's work to humans (enabling humans to control robots with their minds, or helping paralysis victims walk again with the aid of robotic prosthetics). Cognitive computing goes well beyond simple pattern recognition, aiming for a deep understanding of the underlying causes of complex patterns rather than mere recognition that those patterns exist.
Whereas current machine-learning AI systems can determine, from brute-force statistics, which people who make Google searches will respond to different targeted ad promotions, these systems don't know why there is a connection between search behavior and purchasing behavior, only that a useful correlation exists.
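A tiny simulation makes the point: when a hidden factor drives two behaviors, they correlate strongly even though neither causes the other. The numbers below are synthetic, purely for illustration:

```python
# A hidden confounder ("interest") drives both searching and purchasing,
# producing a strong correlation with no direct causal link between them.
import numpy as np

rng = np.random.default_rng(1)
interest = rng.standard_normal(10_000)           # hidden common cause

searches  = interest + 0.5 * rng.standard_normal(10_000)
purchases = interest + 0.5 * rng.standard_normal(10_000)

r = np.corrcoef(searches, purchases)[0, 1]
print(f"search/purchase correlation: r = {r:.2f}  (neither causes the other)")
```

A brute-force statistical system sees only the r of about 0.8; it takes a causal model to see the hidden common cause.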
But using esoteric mathematics, such as the high-dimensional models described by Princeton University researchers Li, Bastian, Welsh, and Rabitz in a 2015 article in The Journal of Physical Chemistry, cognitive-computing experts are gaining deep insights into causal relationships among large numbers of variables (such as the firing patterns of thousands of different neurons) in order to understand the deeper meaning of complicated patterns.
The future is now
Armed with such AI-derived deep understanding, it will soon be possible for computers to communicate with the brain in real time in highly sophisticated ways, enabling science fiction-sounding capabilities that have already been demonstrated in the lab (see references), such as:
- Direct brain-to-brain communication through thought alone.
- Brain-to-brain transfer of learning and memory.
- Hybrid AI/brain learning (AI-assisted brains).
- Thought-controlled computers and machines.
- Mind-controlled prosthetics for spinal cord patients and amputees.
While my last post, Where Will Evolution Take Humans Next, predicted that humans themselves will soon directly determine the direction in which Homo sapiens evolves, through gene editing of human embryos (e.g., with CRISPR/Cas9), here I predict yet another, far faster, form of human evolution: the blurring of the boundary between humans and the machines that humans invent.
That this marriage of humans and machines will occur, I have no doubt. The bigger question is, after it does happen, will human beings be more or less than they once were?
Only the future will tell.