A new project has created a “brain-computer interface” that can translate brainwaves into text.
While there are a few caveats to the research and plenty of fine-tuning still to do, the system can currently translate neural activity into text with error rates as low as 3 percent, not far off professional-level speech transcription. In the not-too-distant future, the “mind-reading” technology could be developed to help people who are unable to speak, such as those with locked-in syndrome.
As reported in the journal Nature Neuroscience, researchers from the University of California, San Francisco trained a set of artificial neural networks (algorithms loosely modeled on human brain function) to associate neural signals with text sentences.
The researchers recruited four participants who already had intracranial electrodes, implanted inside the skull to monitor electrical activity during seizures, and had them read sentences aloud while their neural activity was recorded. The network then used these recordings to learn which patterns of neural activity were likely to correspond to features of speech, such as vowels or consonants. It also learned which words were likely to follow one another and how sentences are structured.
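The study itself used recurrent encoder-decoder networks; as a much simpler stand-in for the core idea of associating recorded activity patterns with words, here is a toy nearest-neighbour sketch. The feature vectors and vocabulary are invented for illustration and bear no relation to the real recordings.

```python
import math

# Hypothetical "training" data: an averaged neural feature vector per word.
# (Invented numbers; the real study learned these associations with
# recurrent neural networks rather than a lookup table.)
templates = {
    "those":     [0.9, 0.1, 0.2],
    "musicians": [0.2, 0.8, 0.3],
    "harmonize": [0.1, 0.3, 0.9],
}

def decode(signal):
    """Return the word whose stored pattern is closest to the signal."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda w: dist(templates[w], signal))

# A noisy recording of "musicians" still decodes to the right word.
print(decode([0.25, 0.75, 0.35]))  # musicians
```

The real model goes further than this sketch in exactly the way the paragraph above describes: it conditions each decoded word on the words already produced, so sentence structure constrains the output.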
After some training, the system was able to deliver fairly reliable transcriptions of neural activity, with an error rate of just 3 percent. Although it can currently handle only single sentences drawn from a set 250-word vocabulary, that low error rate is impressive given that professional speech transcription runs at around 5 percent.
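Figures like “3 percent” in speech decoding are typically word error rates: the minimum number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the true one, divided by the length of the true sentence. A minimal sketch of that standard metric, using invented example sentences:

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard edit-distance dynamic programming over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[-1][-1] / len(ref)

# One substituted word out of five gives a 20 percent error rate.
print(word_error_rate("those thieves stole thirty jewels",
                      "those thieves stole thirty tools"))  # 0.2
```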
There were some blunders, however. When a participant said aloud the sentence “Those musicians harmonize marvelously,” the machine interpreted the neural activity as “The spinach was a famous singer.” The sentence “Those thieves stole thirty jewels” was misinterpreted as “which theatre shows mother goose,” and “The woman is holding a broom” was confused with the slightly sinister “the little is giggling giggling.”
The system currently relies on participants reading sentences aloud, which limits its usefulness for people with speech difficulties. The researchers hope to build on this work by ironing out the errors and expanding the system's ability to map neural activity onto the structures of language. For the time being, however, they say it will be some while before the technology is accurate enough to function as a viable speech prosthesis.
"Although we should like the decoder to learn and exploit the regularities of the language, it remains to show how many data would be required to expand from our tiny languages to a more general form of English," the researchers concluded.