Speak and spell
Aiming to help people who have lost the ability to speak due to severe paralysis, researchers in the UCSF-UC Berkeley bioengineering graduate program have developed a new way for brain-computer interfaces (BCIs) to make communication faster and easier. Building on a groundbreaking 2021 study led by UCSF neurosurgeon Edward Chang, the team designed a BCI that lets users silently spell out sentences from a vocabulary of more than 1,000 words in real time with nearly 100% accuracy.
The 2021 study, in which users with paralysis controlled a BCI to generate full sentences by trying to say the desired words, was limited to a preliminary vocabulary of 50 words and required the participant to attempt to vocalize. In this new study, researchers wanted to see whether the same participant could silently spell out sentences from a much larger vocabulary. The participant silently attempted to say the NATO code word for each letter of the Roman alphabet to spell out intended words and sentences while researchers recorded signals from their brain. The BCI then used custom machine learning algorithms to translate the brain activity directly into text.
“We were able to decode these sequences of code words — for example, Charlie-Alpha-Tango for cat — with 94% accuracy,” said Sean Metzger, Ph.D. student and co-lead author of both studies. “In addition, offline simulations showed this approach could still work with a vocabulary of over 9,000 words while maintaining an accuracy of up to 92%.”
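The spelling scheme described above can be sketched as a simple lookup: each NATO code word stands for one letter, and a decoded sequence of code words spells a word. This is an illustrative Python toy, not the study's actual neural decoder, which maps recorded brain signals (not text) to code words:

```python
# Illustrative sketch only: the real system decodes brain activity into
# code words; here we just show the code-word-to-letter mapping step.
NATO = {
    "alpha": "a", "bravo": "b", "charlie": "c", "delta": "d", "echo": "e",
    "foxtrot": "f", "golf": "g", "hotel": "h", "india": "i", "juliett": "j",
    "kilo": "k", "lima": "l", "mike": "m", "november": "n", "oscar": "o",
    "papa": "p", "quebec": "q", "romeo": "r", "sierra": "s", "tango": "t",
    "uniform": "u", "victor": "v", "whiskey": "w", "xray": "x",
    "yankee": "y", "zulu": "z",
}

def spell_from_code_words(code_words):
    """Translate a sequence of NATO code words into the spelled-out word."""
    return "".join(NATO[w.lower()] for w in code_words)

print(spell_from_code_words(["Charlie", "Alpha", "Tango"]))  # prints "cat"
```

In the study itself, this lookup step sits downstream of machine learning models that classify each silent speech attempt as one of the 26 code words, which is where the reported 94% accuracy comes from.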
The other co-authors of both studies are Jessie Liu, graduate student in the UCSF-UC Berkeley program, and David Moses, a postdoctoral engineer at UCSF. Other co-authors of this study are Gopala Anumanchipalli, assistant professor, and Kaylo Littlejohn, a Ph.D. student, both from the Department of Electrical Engineering and Computer Sciences.
Learn more: Giving a voice to all; Generalizable spelling using a speech neuroprosthesis in an individual with severe limb and vocal paralysis (Nature Communications)