• Innovation. Implant designed to translate brain signals into words

Talking again. The 'pill' that could restore natural communication to patients suffering the sequelae of neurological disorders is getting closer. It is aimed at people affected by the progressive deterioration of amyotrophic lateral sclerosis (ALS) or by a stroke, among other conditions; in short, by a paralysis that prevents the brain from sending its commands to the structures responsible for executing speech, such as the muscles of the lips, tongue, larynx and jaw.

Two parallel articles in Nature, each centered on a woman with a different pathology (ALS and the aftermath of a stroke), highlight key advances in the recovery of communication through two different systems with the same goal: to restore speech function with a device that translates the brain signals the muscles can no longer execute.

Both works successfully validate a proof of concept, although much work remains before the technology is widely available. "There is an urgent need to help people with neurological conditions that deprive them of the universal human need to communicate," explain Nick Ramsey and Nathan Crone in an accompanying article in Nature. "The two papers constitute a crucial proof of concept that communication can be restored using implantable BCIs, but several issues require further research to enable wider dissemination."

How do the two systems work?

Pat Bennett has four sensors, each the size of a baby aspirin, implanted in her brain. The devices transmit signals from a pair of speech-related regions of her brain to state-of-the-art software that decodes her brain activity and converts it into text displayed on a computer screen. "These initial results have validated the concept and, eventually, the technology will catch up to make it easily accessible to people who can't speak," Bennett said. "This means we can stay connected to the world at large, maybe continue to work, maintain relationships with family and friends."

This has been possible thanks to the research of the team of Francis Willett, at Stanford University, which developed a BCI (brain-computer interface) that records the neural activity of individual cells through an array of thin electrodes inserted into the brain, and trained an artificial neural network to decode the intended vocalizations. "This system is trained to know which words should go before others and which phonemes form which words," Willett explains in the institutional statement. "If some phonemes were interpreted incorrectly, an optimal guess can still be made."
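
The idea Willett describes is a decoder paired with a language model: the neural network proposes phonemes, and a statistical model of which phonemes form which words, and which words tend to follow which, picks the most plausible sentence even when some phonemes are misread. Below is a minimal sketch of that idea in Python; the lexicon, probabilities and variable names are illustrative assumptions, not the authors' system.

```python
# Minimal sketch of phoneme decoding plus a language-model prior.
# All values below are invented for illustration, not from the paper.
import math

# Hypothetical per-phoneme posteriors emitted by the neural decoder
# (in the real system a neural network produces these from brain signals).
decoded_phonemes = [
    {"HH": 0.6, "F": 0.4},   # ambiguous first phoneme
    {"AW": 0.9, "AA": 0.1},
    {"S": 0.8, "Z": 0.2},
]

# Toy pronunciation lexicon: which phonemes form which words.
lexicon = {"house": ["HH", "AW", "S"], "fowls": ["F", "AW", "Z"]}

# Toy bigram language model: which words should go before others.
bigram = {("the", "house"): 0.05, ("the", "fowls"): 0.001}

def word_score(word: str, prev_word: str) -> float:
    """Combine phoneme evidence with the language-model prior (log domain)."""
    acoustic = sum(math.log(frame.get(ph, 1e-6))
                   for frame, ph in zip(decoded_phonemes, lexicon[word]))
    prior = math.log(bigram.get((prev_word, word), 1e-6))
    return acoustic + prior

best = max(lexicon, key=lambda w: word_score(w, "the"))
print(best)  # "house": the prior recovers the word despite noisy phonemes
```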

With the help of the device, Bennett has managed to communicate at an average speed of 62 words per minute, 3.4 times faster than the previous record for a similar device and approaching the speed of a natural conversation, which is around 160 words per minute. The system achieved a word error rate of 9.1% with a 50-word vocabulary, 2.7 times fewer errors than the previous state-of-the-art speech BCI from 2021, and a word error rate of 23.8% with a vocabulary of 125,000 words.
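
The word error rate cited in these figures is a standard speech-recognition metric: the number of word substitutions, insertions and deletions needed to turn the system's output into the intended sentence, divided by the length of the intended sentence. A minimal sketch of the computation (the example sentences are invented for illustration):

```python
# Word error rate (WER) = (substitutions + insertions + deletions)
# / words in the reference, via Levenshtein distance over words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("we can stay connected to the world",
          "we can say connected to a world"))  # 2/7, i.e. about 29% WER
```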

Meanwhile, just over 60 kilometers away, at the University of California, San Francisco, Edward F. Chang's team has also successfully demonstrated another proof of concept of a BCI. In this case, they used 253 nonpenetrating electrodes placed on the surface of the brain, which detect the activity of many cells at sites throughout the speech cortex. This BCI decodes brain signals to generate three outputs simultaneously: text, audible speech and a talking avatar. "Our goal is to restore a form of full communication, which is really the most natural way for us to talk to others," Chang said in a statement. "These advances bring us much closer to making this a real solution for patients."
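
One way to picture a single implant driving three outputs at once is a shared encoder over the 253 electrode channels feeding separate output heads for text, audio and facial animation. The PyTorch sketch below is an illustrative assumption of such an architecture, not the code from Chang's paper; all layer types, sizes and names are invented.

```python
# Hypothetical sketch: one shared representation of cortical
# activity feeding three output heads (text, audio, avatar).
import torch
import torch.nn as nn

class MultimodalSpeechBCI(nn.Module):
    def __init__(self, n_electrodes=253, hidden=256, n_phonemes=40, n_face=50):
        super().__init__()
        # Shared recurrent encoder over the 253-channel surface recordings.
        self.encoder = nn.GRU(n_electrodes, hidden, batch_first=True)
        self.text_head = nn.Linear(hidden, n_phonemes)  # -> text (phoneme logits)
        self.audio_head = nn.Linear(hidden, 80)         # -> mel frames for a vocoder
        self.avatar_head = nn.Linear(hidden, n_face)    # -> facial articulation

    def forward(self, x):  # x: (batch, time, electrodes)
        h, _ = self.encoder(x)
        return self.text_head(h), self.audio_head(h), self.avatar_head(h)

model = MultimodalSpeechBCI()
signals = torch.randn(1, 100, 253)   # a short window of neural activity
text_logits, mel, face = model(signals)
print(text_logits.shape, mel.shape, face.shape)
```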

The research participant in Chang's study is hooked up to computers that translate her brain signals into the speech and facial movements of an avatar as she tries to speak. Left, UCSF clinical research coordinator Max Dougherty. NOAH BERGER / UCSF

The researchers trained a deep-learning model to decipher neural data collected from a patient with severe paralysis as she attempted to silently utter complete sentences. Brain-to-text translation reached an average speed of 78 words per minute, 4.3 times faster than the previous record and even closer to the speed of natural conversation. The system achieved a word error rate of 4.9% when decoding sentences from a set of 50 sentences, five times fewer errors than the previous state-of-the-art speech BCI.

Other figures that consolidate the progress: a word error rate of 25% when decoding sentences in real time with a vocabulary of more than 1,000 words, and a word error rate of 28% in offline simulations with a vocabulary of more than 39,000 words.

Brain signals were also translated directly into intelligible synthesized speech that untrained listeners could understand, with a 28% word error rate for a set of 529 phrases; the synthesized voice was customized to sound like the participant's voice before her injury.

One of the advantages of the San Francisco system over Stanford's is the expressiveness achieved through the avatar. The system decoded neural activity into the avatar's facial movements during speech, as well as into nonverbal expressions, and stable, high-performance decoding was demonstrated over a period of months. Overall, this multimodal BCI offers people with paralysis more possibilities to communicate in a naturalistic and expressive way.

A new step towards speech 'restoration'

These works are the fruit of earlier BCI advances and are capable of decoding brain activity into speech more quickly, more accurately and with a wider vocabulary than existing technologies.

In the case of Chang's team, the group had previously demonstrated that it was possible to decode brain signals into text in a man who had also suffered a stroke many years earlier, a key step published in The New England Journal of Medicine two years ago. The current study demonstrates something more ambitious: decoding brain signals into the richness of speech, along with the movements that animate a person's face during conversation.

In 2021, the Stanford team likewise published a paper in Nature showing the possibility of decoding speech from the brain activity of a person with paralysis, but only in the form of text and with limited speed, accuracy and vocabulary.

Ramsey and Crone conclude that the two interfaces "represent a breakthrough in neuroscience and neuroengineering research, and hold great promise for alleviating the suffering of people who have lost their voices as a result of crippling neurological injuries and illnesses." They also point out that more work is required before the technology can be extended to many more types of patients.

  • Neurology