Giving a voice to all
UCSF and UC Berkeley researchers have demonstrated new ways that brain-computer interfaces (BCIs) can make communication faster and easier for people who have lost the ability to speak due to severe paralysis.
In a study published Nov. 8 in Nature Communications, a team of researchers in the UCSF-UC Berkeley graduate program in bioengineering designed a BCI that allows a user to silently spell out sentences from a vocabulary of more than 1,000 words in real time with nearly 100% accuracy. This silent, spelling-based approach represents an advancement in BCI research that could someday help users communicate more naturally and with less effort.
“These results show that it is possible to use speech BCIs to drive high-accuracy, large-vocabulary communication,” said Sean Metzger, a co-lead author of the study and graduate student in the UCSF-UC Berkeley program. “It brings speech BCIs closer to clinical viability and means that they are closer to being ready for day-to-day use by patients.”
This work builds on a groundbreaking study from 2021 led by UCSF neurosurgeon Edward Chang that demonstrated BCIs can successfully translate brain activity directly into text in a person with paralysis. The co-lead authors of both studies are Metzger; Jessie Liu, graduate student in the UCSF-UC Berkeley program; and David Moses, a postdoctoral engineer in the Chang lab at UCSF.
The 2021 study demonstrated that a person with paralysis who was unable to speak, move or type could control a BCI to generate full sentences simply by trying to say the desired words. This initial proof of concept, however, was limited to a preliminary vocabulary of 50 words and required the participant to try to vocalize, which took effort.
In this new study, researchers wanted to see if it was possible for the same participant to silently spell out sentences from a much larger vocabulary. The clinical trial participant would silently attempt to say NATO code words for each letter of the Roman alphabet — such as Alpha for A, Bravo for B, Charlie for C and so on — to spell out his intended words and sentences while researchers recorded signals from his brain.
The BCI then used custom machine learning algorithms to translate his brain activity directly into text, decoding the participant’s intended sentences from a vocabulary of over 1,000 words in real time and with high accuracy.
“We were able to decode these sequences of code words — for example, Charlie-Alpha-Tango for cat — with 94% accuracy,” said Metzger. “In addition, offline simulations showed this approach could still work with a vocabulary of over 9,000 words while maintaining an accuracy up to 92%.”
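The spelling scheme described above boils down to mapping a sequence of decoded NATO code words back to letters. A minimal sketch in Python illustrates the idea; this is not the study's actual decoder, and the function name and dictionary here are illustrative assumptions:

```python
# Illustrative sketch only: maps NATO code words to letters, as in the
# silent-spelling approach described above. The real system decodes these
# code words from neural signals; here we assume they are already known.

NATO_TO_LETTER = {
    "alpha": "a", "bravo": "b", "charlie": "c", "delta": "d",
    "echo": "e", "foxtrot": "f", "golf": "g", "hotel": "h",
    "india": "i", "juliett": "j", "kilo": "k", "lima": "l",
    "mike": "m", "november": "n", "oscar": "o", "papa": "p",
    "quebec": "q", "romeo": "r", "sierra": "s", "tango": "t",
    "uniform": "u", "victor": "v", "whiskey": "w", "xray": "x",
    "yankee": "y", "zulu": "z",
}

def decode_code_words(code_words):
    """Translate a sequence of NATO code words into the spelled-out word."""
    return "".join(NATO_TO_LETTER[w.lower()] for w in code_words)

print(decode_code_words(["Charlie", "Alpha", "Tango"]))  # prints "cat"
```

In the study itself, each code word is first predicted from brain activity by machine learning models, and a language model helps constrain the letter sequences to valid words from the vocabulary.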
According to Metzger, this study demonstrates a practical approach to achieving high-accuracy communication using speech BCIs while offering future users the promise of more fluid interaction. “Each advancement brings us one step closer to improving communication and providing greater autonomy to people who have lost the ability to speak,” he said.
Other UC Berkeley co-authors of this study are Gopala Anumanchipalli, assistant professor, and Kaylo Littlejohn, a Ph.D. student, both in the Department of Electrical Engineering and Computer Sciences. They are also affiliated with the Department of Neurological Surgery at UCSF and the UCSF Weill Institute for Neurosciences.