It’s not quite telepathy, but a group of scientists has successfully eavesdropped on our inner thoughts for the first time. Using a newly designed algorithm, researchers were able to work out what people were saying in their heads based on their brain activity. The idea is not to give people Charles Xavier-style X-Men powers, but eventually to use such a system to help individuals who can’t speak, for example because of paralysis, communicate with others. The work has been published in Frontiers in Neuroengineering.
When someone talks to you, the sound waves they produce switch on a specific set of nerve cells, or neurons, located in your inner ear. These then relay the sensory information to parts of the brain that interpret the sound as words. But does saying words in your head, for example when reading silently, activate the same neurons as speaking out loud? This is the question that a group of University of California, Berkeley, researchers was keen to answer.
To find out more, they examined the brain activity of seven individuals undergoing epilepsy surgery. Using a technique called electrocorticography, which involves measuring neuronal activity via electrodes placed on the surface of the brain, the team took recordings while the patients either read out loud or performed a silent reading task. Both tasks involved the subjects reading short pieces of text that scrolled across a video screen. The team also included a control condition in which recordings were taken while the participants weren’t performing either task.
During the overt (reading aloud) task, the researchers mapped which neurons became active during specific aspects of speech and used this to construct a decoder for each participant. After working out which firing patterns corresponded to particular words, they set the decoder to work on the participants’ brain activity during silent reading. Remarkably, it was able to translate words that several of the volunteers were thinking, using only their neuronal firing patterns.
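The general recipe here is to fit a model that maps neural activity to speech features while a person reads aloud, and then apply that same model to activity recorded during silent reading. The sketch below illustrates only that idea with synthetic data and an off-the-shelf linear model; the array sizes, feature choices, and model are assumptions for illustration and are not the study’s actual decoder.

```python
# Hypothetical sketch: fit a decoder on overt-speech neural activity,
# then apply it to covert (silent) reading activity.
# All shapes and the choice of a ridge-regression model are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_electrodes = 64           # number of recording channels (assumed)
n_train, n_test = 500, 100  # time samples during overt / covert tasks
n_features = 32             # e.g. spectral features of the spoken audio (assumed)

# Overt task: neural activity paired with the speech actually produced.
X_overt = rng.standard_normal((n_train, n_electrodes))
true_map = rng.standard_normal((n_electrodes, n_features))
y_speech = X_overt @ true_map + 0.1 * rng.standard_normal((n_train, n_features))

# Fit a per-participant decoder on the overt data.
decoder = Ridge(alpha=1.0).fit(X_overt, y_speech)

# Covert task: only neural activity is available; predict the speech features.
X_covert = rng.standard_normal((n_test, n_electrodes))
predicted_speech_features = decoder.predict(X_covert)
print(predicted_speech_features.shape)  # (100, 32)
```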
The researchers are also using the decoder to predict what music a person is listening to: they play particular songs to the volunteers and, once again, examine the neuronal firing patterns evoked by different aspects of the music.
“Sound is sound,” lead author Brian Pasley told New Scientist. “It all helps us understand different aspects of how the brain processes it.”
While the preliminary results are certainly encouraging, the algorithms aren’t yet accurate enough to build a device for patients whose medical conditions leave them unable to speak. The researchers are therefore hoping to improve the decoder by looking at brain activity during different pronunciations of words and at different speeds of speech.
“Ultimately,” says Pasley, “if we understand covert speech well enough, we’ll be able to create a medical prosthesis that could help someone who is paralyzed, or locked in and can’t speak.”
[Via New Scientist, Science Alert and Frontiers in Neuroengineering]