Robert E. Hampson, Ph.D.
As a young graduate student, I was fascinated with the idea that someday we, as scientists, would figure out a way to read signals in the brain with enough resolution to operate computer interfaces directly with our brains. The computer interface helmet in James P. Hogan's The Genesis Machine and the immersive virtual reality in Realtime Interrupt were just the type of brain-to-machine interface that inspired me to enter the field of neurophysiology. Much more recently, Mary Lou Jepsen, founder of Openwater and formerly an executive at Intel, Google[X] and more (https://www.opnwatr.io/about-us), has suggested that one of her patents may even go a step further and provide machine-assisted telepathy in the form of sensors embedded in a wearable hat. Unfortunately, the biggest hurdle in brain-to-machine or brain-to-computer interfacing (commonly referred to as BCI) is the ability to pick up signals from deep inside the brain. EEG signals from the brain surface are easy; memory signals from the hippocampus and deeper structures are much harder to separate from background. What we need is a good way to look inside the brain and decipher the activity that corresponds to specific thoughts and intentions.
Looking inside: X-Ray
Since the discovery of X-Rays in 1895 by Wilhelm Roentgen, scientists have had the ability to look inside the human body and see the skeleton. High-energy photons coming from an X-Ray source or emitter (usually a variation of a cathode ray tube) turn photographic film dark, or activate the pixels of a charge-coupled device (CCD) camera. The calcium and phosphorus of bone block and/or deflect the X-Rays, allowing bone to appear as a white "shadow" on the otherwise black photographic image. Thus the familiar white-bone-on-black-background image that most people commonly associate with X-Ray is a negative image, and the primary details of bone and joints are literally "holes" in the image!
X-Ray, however, produces a flat, 2-Dimensional image: photons travel in a straight line from emitter to detector (or film). The incorporation of rotating emitters and detectors, as proposed by William Oldendorf in 1961, allows Computed Tomography (CT—originally termed Computerized Axial Tomography, or CAT scan) to compile a series of 2-D images into a 3-Dimensional representation of the body. With CT, physicians can effectively look inside the body and easily find broken bones, kidney and gall bladder stones, calcium deposits, or foreign bodies. What they can't see is any detail of muscles, blood vessels, intestines . . . or brain.
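The core idea of tomography, that several 1-D "shadows" taken from different angles can be combined into a cross-sectional image, can be sketched in a few lines of Python. This is a deliberately tiny illustration, not what a real scanner does: it uses just two perpendicular views and unfiltered backprojection, whereas actual CT uses many rotation angles and filtered backprojection.

```python
import numpy as np

# Toy cross-section: a dense "bone" spot inside otherwise empty tissue.
size = 8
body = np.zeros((size, size))
body[2:4, 5:7] = 1.0  # the dense region

# Two X-ray "views": each is the 1-D shadow cast by a parallel beam.
view_down = body.sum(axis=0)    # beam traveling top-to-bottom
view_across = body.sum(axis=1)  # beam traveling left-to-right

# Backprojection: smear each shadow back across the image and add them up.
# Pixels where both shadows are strong accumulate the highest values.
reconstruction = view_down[np.newaxis, :] + view_across[:, np.newaxis]

# The brightest pixel marks where the shadows intersect: the dense spot.
peak = tuple(int(i) for i in
             np.unravel_index(reconstruction.argmax(), reconstruction.shape))
print(peak)  # -> (2, 5), inside the dense region at rows 2-3, columns 5-6
```

With only two views, the reconstruction also contains faint "streaks" along each beam direction; adding views at many more angles (the rotating emitter and detector ring of a CT scanner) suppresses those streaks and sharpens the cross-section.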
Visualizing "Soft Tissue"
For many of these "soft tissue" systems, it is possible to add a high-density contrast agent (such as barium) that allows visualization of blood vessels or the lining of the stomach and intestines; however, there is a limit to how this technique can be applied to the brain, since visualizing brain function requires a different approach than simply looking at physical features! One way to map brain function is simply to provide a chemical (e.g. a dye) that will be taken up by active brain cells—thus, the more active the brain area, the more dye will be present. It's the same principle as a contrast agent for visualizing the circulatory or digestive systems, but instead of structure, it reveals which brain regions are active.
The most common "dye"-based scanning agents for the brain are radioactive isotopes that can be detected using scintigraphy or Positron Emission Tomography (PET). Like CT, PET rotates detectors around the person being scanned and builds a 3-D image of where the isotope builds up in the body. Unlike X-Ray techniques, isotopes produce their own photons or positrons (anti-electrons), so no emitter device is required, just detectors. To image active brain areas, PET often utilizes "deoxyglucose," a form of glucose that is taken up by brain cells and broken into component molecules that remain in the cells for hours. The molecule is tagged with an isotope such as fluorine-18 (18F), which emits positrons for just a few hours and decays overnight.
PET scans are often used in cancer diagnosis, since cancerous cells utilize much more glucose than normal cells; brain cells consume even more glucose, yet even among brain areas there are detectable differences. For example, when a subject is reading, the visual and language areas "light up" due to increased blood flow and glucose utilization. In this manner, doctors and scientists can gain an understanding of how brain areas are involved in different brain functions, but it still does not help with understanding the content or contributions of those activities.
Mapping Structure and Function: Magnetic Resonance Imaging (MRI)
The current "state of the art" in brain imaging utilizes a technique originally developed for organic chemistry to characterize and identify chemical composition. As a chemistry undergraduate, I used Nuclear Magnetic Resonance (NMR) to count hydrogen atoms in organic molecules. A powerful magnetic field is used to align the position and direction of spin of the hydrogen atoms, then a radio signal causes the hydrogen to "energize" and "flip" its spin to the opposite direction. When the spin "relaxes" to the original direction, energy is released, and can be detected in a manner similar to the imaging techniques above. The addition of tomography (i.e. the 3-D arrangement of detectors and computer reconstruction) and tuning the system to the resonance frequencies of the hydrogen in water molecules gave us MRI and a whole new way to visualize the soft, water-containing tissues of the body.
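The "resonance" the scanner tunes to follows the Larmor relation, which ties the hydrogen spin-flip frequency directly to the strength of the magnetic field. (The numbers below are the standard physical constants for hydrogen, not figures from this essay.)

```latex
f = \frac{\gamma}{2\pi} B_0
% gamma/2pi, the gyromagnetic ratio of hydrogen (1H), is about 42.58 MHz per tesla,
% so a 1.5 T clinical magnet listens at roughly 64 MHz, and a 3 T magnet at roughly 128 MHz.
```

This proportionality is also why the technique can be "tuned": small differences in the local magnetic environment shift the resonance slightly, letting the scanner distinguish different tissues and chemical states.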
Within the brain, the ability to see non-bony structures has revolutionized the fields of neurology and neurosurgery. Now doctors can see the "holes" left by strokes, or the damage due to concussion. Epilepsy patients can be scanned for abnormal structures that might cause seizures, and surgeons have new tools to precisely target their operations. Furthermore, additional tunings of the MRI resonant frequencies have provided such unique tools as Diffusion Tensor Imaging, which allows tracing the projections of brain cells to assist not only in understanding the connected networks of the brain, but also the types of damage produced by concussion and traumatic brain injury. Sensitizing the MRI signal to blood oxygenation allows a much more precise measurement of blood flow and metabolic activity than PET scans, and provides the first great imaging tool for understanding the content and flow of information within the brain!
Functional MRI (fMRI) is a tool that allows near real-time identification of which brain regions (and cells) have high blood flow and are consuming oxygen. fMRI maps the Blood-Oxygen Level Dependent (BOLD) signal, which indicates highly active clusters of brain cells in volumes of less than one cubic millimeter. While this volume is still larger than the size of individual neurons (~0.005 mm3) and does not have the precision of electrophysiology recording probes (~0.01 mm3), this resolution is more than sufficient to identify differences in the timing and intensity of activity in various brain structures and to decipher networks of connected brain areas. Furthermore, fMRI allows a look at the whole brain's activity in small time intervals; electrical recordings to date can identify only a thousand or so neurons (hence a volume of only a few mm3) at a time.
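A quick back-of-envelope comparison, using only the rough volumes quoted above, shows why a single fMRI voxel necessarily averages over many neurons. (The exact figures vary by scanner and brain region; these are just the essay's ballpark numbers.)

```python
# Ballpark volumes from the discussion above (all in cubic millimeters).
voxel_mm3 = 1.0     # a typical fMRI voxel ("less than one cubic millimeter")
neuron_mm3 = 0.005  # space attributed to a single neuron
probe_mm3 = 0.01    # sampling volume of an electrophysiology recording probe

# How many neurons' worth of tissue one BOLD voxel averages over:
neurons_per_voxel = voxel_mm3 / neuron_mm3
print(neurons_per_voxel)  # -> 200.0

# A recording probe, by contrast, samples only a couple of neurons' volume:
neurons_per_probe = probe_mm3 / neuron_mm3
print(neurons_per_probe)  # -> 2.0
```

The trade-off stands out in the numbers: even a fine BOLD voxel lumps together a couple hundred neurons' worth of tissue, while an electrode samples at near single-cell scale but covers only a tiny fraction of the brain.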
Can fMRI be used to "read" the brain? Dr. Jack Gallant at the University of California at Berkeley thinks so! In a study from 2011 (http://www.bbc.com/future/story/20140717-i-can-read-your-mind), Gallant and his team showed various pictures to subjects while in the MRI scanner and recorded the fMRI signals associated with each picture. He then had the subjects "daydream" and imagine a scene using those pictures, and matched the new fMRI signals to the previously recorded signals. While the reconstruction was not up to "movie" standards, the team was certainly able to detect a rough timeline of imagined images from the brain signals alone! (http://www.sciencedirect.com/science/article/pii/S0960982211009377)
Reading the Brain
So, is fMRI the solution to being able to "read" brains and create computer interfaces? Not exactly.
In the first place, MRI machines are big! It is common practice to build hospital and research facilities to fit the MRI machines, and not build machines to fit the facilities. Second, the magnets required to create the fields are intense, and no magnetic materials are allowed in the facility. Finally, despite the successes with fMRI, the process is rather slow. It can be used in "real-time" to track activity changes, but the resolution of those changes is in tens of seconds to minutes. From my own research using direct electrical recording of neurons, we know that the actual "codes" associated with memory last only milliseconds.
With respect to the portability issue, Mary Lou Jepsen has a solution (and a patent) for a "wearable MRI" that uses infrared light sensors sewn into a knit cap (https://www.cnbc.com/2017/07/07/this-inventor-is-developing-technology-that-could-enable-telepathy.html). While at least one claim—that the device utilizes the same blood flow signals as MRI—is true, the Openwater device is not MRI at all, but a technology known as functional Near-Infrared Spectroscopy (fNIRS). fNIRS takes advantage of the fact that the skull and most brain tissue are essentially transparent to infrared light. IR emitters on the skull can project through the scalp, bone and brain tissue, with the signal blocked primarily by the iron-hemoglobin of the blood. fNIRS tracks blood flow similarly to fMRI, and it can respond faster (0.1-1 sec) than fMRI. However, the sensors are limited in coverage, and to provide the same 3-D imaging as fMRI for a single brain would require more fNIRS sensors than currently exist in the U.S. Still, the technology is a step in the right direction, although its applicability to machine-based "telepathy" is still quite a few years further in the future!
Another promising technology is Magnetoencephalography (MEG). Encephalography is the science of recording brain signals, and its more familiar cousin, electroencephalography (EEG), is well known in neuroscience, neurology and neurosurgery. EEG records signals from all over the brain, and its biggest drawback for developing interfaces is that it is difficult to precisely identify where a signal comes from without actually implanting an electrode next to the area of interest. While this is commonly done for disease diagnosis and treatment, it is not a desirable technique for a person wanting to control their home automation!
MEG on the other hand is "noninvasive" and does not require wires or electrodes. Rather, it records the minute magnetic fields produced by the electrical nerve impulses, and can localize those signals to volumes of less than 0.10 mm3 within the brain. Like MRI, however, it currently requires room-sized devices and liquid helium cooling for the detectors! Still, medical device technology is building more sensitive, less "delicate" detectors, and MEG may very well become the alternative to electrophysiology with wire electrodes within the next 10 years.
For purely scientific purposes in the laboratory, such as identifying single brain cells and mapping connections within the brain, there is a relatively recent technique known as "Clarity" which renders the brain (after death) perfectly transparent and visible to researchers. A series of treatments causes the proteins and pigments in brain tissue to lose the ability to block or absorb light, while at the same time preserving its structure and all of its connections. Special dyes can be introduced prior to the procedure to ensure that particular clusters of cells will be fluorescent, or visible in particular wavelengths of light. Thus, only the labeled cells (whether normal or abnormal) will show up in an otherwise transparent brain. Moreover, the dyes can be introduced to only the cells that are connected to each other, allowing specific clusters and networks of brain cells to be visible in the Clarity preparation. This new tool is essential to researchers' understanding of how the brain forms its functional networks, and while it is not a tool for reading the intact human brain, the knowledge gained from Clarity can be added to the information from fMRI, MEG and fNIRS to better decipher the information content recorded using those techniques.
The Brain is not a Computer, so How Can We Interface with It?
A recent publication by psychologist Robert Epstein created quite a stir among folks who know my research by stating that the brain is not a computer, that it does not "represent" or "process" information, and that there is actually no such thing as "memory" (https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer). While at least one part of Dr. Epstein's premise is accurate (the brain is not computerlike), the rest of his statements are true only within the very limited context of his effort to discourage readers from using computer analogies for brain function. My own response is that Dr. Epstein clearly does not know (or listen to) many neurophysiologists. The article used the analogy that nowhere in the brain is there a "picture" of a one-dollar bill, and that subjects asked to draw one from memory were unable to do so. That experiment had shortcomings: drawing ability is itself a confounding variable (it is highly variable, which is why fine artwork is prized), and no one with eidetic or "photographic" memory was tested. In science, an experiment with more than one uncontrolled variable (in this case, drawing ability) is invalid, and its conclusions suspect. In contrast, I present the work of many hippocampal physiologists over the past 50+ years showing that neurons in the hippocampus represent position and orientation in space (http://www.memoryspace.mvm.ed.ac.uk/memoryandplacecells.html). For many years, despite profoundly consistent "Place Cell" results, the idea seemed implausible, since there surely could not be any "Cartesian mapping system" in the brain . . . until 2005, when researchers in the laboratory of Edvard and May-Britt Moser discovered "grid cells" in the Entorhinal Cortex (http://www.scholarpedia.org/article/Grid_cells), a brain region which provides input to the "Place Cells" of the hippocampus!
These cells fire as an animal moves through the environment, and they represent a hexagonal "grid" that serves as the basic mapping unit on which Place Cell firing is organized.
With respect to memory, my own research has identified neurons which are active in response to particular combinations of the information within a behavioral memory task. Those neurons not only represent the information within the task, they can be electrically activated to influence memory behavior (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3141091/). If the brain did not represent and process information, it would not be possible to construct brain-machine interfaces, and yet scientists have demonstrated brain-control of a robotic arm (http://www.reuters.com/article/us-science-prosthetics-mindcontrol/mind-controlled-robotic-arm-has-skill-and-speed-of-human-limb-idUSBRE8BG01H20121217), simple controls for video games (http://neurosky.com/) and little furry ears that twitch when the wearer thinks happy thoughts (http://neurowear.com/projects_detail/necomimi.html)!
The secret to effective BCI is a better understanding of how brain cells work within specific contexts such as memory (that is, the signals that represent information content within brain networks), and faster, more precise tools for imaging that content as it is transformed. Brain imaging appears to be following an exponential curve of improvement, and the current techniques show great promise for "wearable" interfaces in the next 5-10 years.
It probably will not be telepathy as proposed in science fiction and fantasy alike. However, when the interface is good enough that an essay such as this can be dictated by thought and not typing, it will be the next best thing!
Copyright © 2017 Robert E. Hampson, Ph.D.
Dr. Robert E. Hampson is a neuroscientist with a keen interest in learning, memory and teaching brain science. His current research involves information encoding for memory, as well as developing systems to repair memory function in patients with head injuries, diseases and disorders such as stroke, and Alzheimer's Disease. He is also known to Baen readers and SF convention audiences by his penname "Tedd Roberts."