“Mind reading” may be about to become a reality – and in the most literal sense possible, as a new breakthrough from researchers at the University of Technology Sydney’s GrapheneX-UTS Human-centric Artificial Intelligence Centre sees thoughts transformed into words on a screen.
“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Chin-Teng Lin, Distinguished Professor at the UTS School of Computer Science and Director of the GrapheneX-UTS HAI Centre.
“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding,” Lin, who led the research, explained. “The integration with large language models is also opening new frontiers in neuroscience and AI.”
In a study selected as a spotlight paper at the NeurIPS conference, an annual meeting of researchers in artificial intelligence and machine learning, participants silently read passages of text while an AI model called DeWave – using only their brainwaves as input – decoded those words and displayed them on a screen.
While it’s not the first technology to be able to translate brain signals into language, it’s the only one so far to require neither brain implants nor access to a full-on MRI machine. It also has an edge on predecessors that require additional input such as eye-tracking software, the researchers say, as the new technology can be used with or without such extras.
Instead, users need only to wear a cap that records their brain activity via electroencephalogram (EEG) – much more practical and convenient than an eye-tracker (not to mention an MRI machine). That meant the signal was a bit noisier than information gained from implants, the researchers admitted – though even then, the tech performed pretty well in trials. Accuracy measurements using the BLEU algorithm – a way to evaluate the similarity of an original text to a machine-translated output by giving it a score between 0 and 1 – put the new tech at about 0.4.
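To see roughly what a BLEU-style score of 0.4 means, here is a minimal sketch of the idea behind the metric – a simplified unigram version with a brevity penalty, written for illustration. The full BLEU metric combines clipped precisions for n-grams up to length four; the `bleu1` function name and the example sentences are this sketch's own, not from the study.

```python
from collections import Counter
import math

def bleu1(reference: str, candidate: str) -> float:
    """Simplified BLEU: clipped unigram precision times a brevity penalty.

    The real metric averages clipped precisions over 1- to 4-grams;
    this sketch uses single words only, but the scale (0 to 1, where 1
    means a perfect match) is the same.
    """
    ref_tokens = reference.lower().split()
    cand_tokens = candidate.lower().split()
    ref_counts = Counter(ref_tokens)
    # Clipped matches: each reference word may only be credited as
    # many times as it appears in the reference.
    matches = sum(min(count, ref_counts[word])
                  for word, count in Counter(cand_tokens).items())
    precision = matches / len(cand_tokens)
    # Brevity penalty: punish candidates shorter than the reference,
    # so a one-word "translation" can't score a perfect precision.
    if len(cand_tokens) >= len(ref_tokens):
        brevity_penalty = 1.0
    else:
        brevity_penalty = math.exp(1 - len(ref_tokens) / len(cand_tokens))
    return brevity_penalty * precision

# A synonym slip of the kind the researchers describe ("the man"
# decoded instead of "the author") still earns partial credit:
reference = "the author walked to the station"
candidate = "the man walked to the station"
print(round(bleu1(reference, candidate), 2))  # → 0.83
```

An exact match scores 1.0, and each substituted or missing word pulls the score down – which is why partial-but-meaningful decodings like DeWave's land in the middle of the range rather than at zero.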
That, admittedly, isn’t as good as some of the other options that rely on more invasive methods. “The model is more adept at matching verbs than nouns,” explained Yiqun Duan, first author on the paper accompanying the research. “When it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’.”
“We think [these errors are] because when the brain processes these words, semantically similar words might produce similar brain wave patterns,” Duan said.
But the researchers believe they can improve this accuracy to around 0.9 – a level comparable with traditional language translation programs. They suspect they already have one advantage: their tests involved 29 participants, which may not sound like a lot, but is an order of magnitude more than many other decoding trials, making the results more likely to hold up across different people.
“Despite the challenges, our model yields meaningful results,” Duan said, “aligning keywords and forming similar sentence structures.”