Using AI and brain implants, researchers reconnect ALS patient with their lost voice

Researchers at UC Davis have used a brain-computer interface implant to help restore a patient’s voice, after ALS had slowly robbed him of his ability to speak clearly.

Using neural sensors developed by Blackrock Neurotech, plus AI text-to-speech software designed to sound like his own voice based on recordings made years earlier, 45-year-old Casey Harrell was able to say, “It feels a lot like me… It makes people cry, who have not heard me in a while.”

The technology does not read a person’s thoughts or aim to translate their inner voice. Instead, it captures the commands the brain sends to the muscles used in speech.

“And we are basically listening into that, and we’re translating those patterns of brain activity into a phoneme—like a syllable or the unit of speech—and then the words they’re trying to say,” Sergey Stavisky, co-director of the UC Davis Neuroprosthetics Lab, said in a statement.
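
In rough terms, that pipeline, from patterns of brain activity to phonemes to words, can be sketched in code. The example below is a minimal illustration of the idea, not the team’s published decoder: a per-timestep phoneme classifier’s output is collapsed into a phoneme sequence (CTC-style greedy decoding) and then matched against a pronunciation lexicon. The phoneme inventory, the toy lexicon, and the simulated “neural” input are all assumptions made for the sketch.

```python
# Minimal sketch of the decoding idea described above, NOT the UC Davis
# team's actual pipeline: per-timestep neural features are classified
# into phonemes, repeats and blanks are collapsed (CTC-style), and the
# phoneme string is looked up in a pronunciation lexicon.
# The phoneme set, lexicon, and fake "neural" logits are all hypothetical.

import numpy as np

PHONEMES = ["_", "HH", "AH", "L", "OW"]  # "_" is the CTC blank; toy inventory

# Hypothetical lexicon mapping phoneme sequences to words.
LEXICON = {("HH", "AH", "L", "OW"): "hello"}

def decode_phonemes(logits: np.ndarray) -> list[str]:
    """Greedy CTC decode: argmax per timestep, collapse repeats, drop blanks."""
    best = logits.argmax(axis=1)  # most likely phoneme at each timestep
    collapsed, prev = [], None
    for idx in best:
        if idx != prev and PHONEMES[idx] != "_":
            collapsed.append(PHONEMES[idx])
        prev = idx
    return collapsed

def phonemes_to_word(phones: list[str]) -> str:
    """Look the decoded phoneme sequence up in the (toy) lexicon."""
    return LEXICON.get(tuple(phones), "<unknown>")

# Fake per-timestep phoneme logits standing in for a classifier's output
# on real intracortical features (6 timesteps x 5 phoneme classes).
rng = np.random.default_rng(0)
logits = rng.normal(size=(6, len(PHONEMES)))
for t, p in enumerate([1, 1, 2, 3, 4, 0]):  # bias toward "HH HH AH L OW _"
    logits[t, p] += 10.0

phones = decode_phonemes(logits)
print(phones)                    # ['HH', 'AH', 'L', 'OW']
print(phonemes_to_word(phones))  # hello
```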

Researchers said their approach has been more than 97% accurate, comparing favorably with the roughly 95% rate seen in commercial smartphone applications working in the opposite direction, which translate a person’s voice into digital commands.
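
For context, accuracy figures like these are generally reported as one minus the word error rate: the fraction of words substituted, inserted, or deleted relative to what the speaker intended to say. The sketch below shows how such a rate is typically computed, assuming standard Levenshtein word alignment rather than the study’s actual scoring code.

```python
# Word-error-rate calculation of the kind behind accuracy figures like
# the 97% above; standard Levenshtein alignment over words, not the
# study's actual scoring script.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of six: 16.7% WER, i.e. 83.3% word accuracy.
wer = word_error_rate("it feels a lot like me", "it feels a lot like we")
print(f"WER: {wer:.1%}, accuracy: {1 - wer:.1%}")
```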

Blackrock’s NeuroPort array, which previously received a breakthrough designation from the FDA, carries 64 electrodes placed through the skull and into the cortex; four of the arrays were implanted in a July 2023 procedure. After recording data across dozens of sessions over eight months, the researchers said Harrell was able to communicate within minutes of turning the system on, in both prompted and spontaneous conversations, with a vocabulary of more than 125,000 words.

“Previous speech BCI systems had frequent word errors. This made it difficult for the user to be understood consistently and was a barrier to communication,” said neuroprosthetics lab co-director David Brandman. “Our objective was to develop a system that empowered someone to be understood whenever they wanted to speak.”

The researchers’ work, as part of the BrainGate clinical trial consortium, was published this week in the New England Journal of Medicine. Harrell was also interviewed on the NEJM’s Intention to Treat podcast, using his own voice.

BrainGate’s researchers, who have employed a variety of BCI technologies across several institutions over the past two decades, also showed last year that a patient with paralysis was able to generate text on a computer screen at a rate of 62 words per minute.

“Casey and our other BrainGate participants are truly extraordinary,” said BrainGate’s director, Leigh Hochberg, of Brown University’s Carney Institute for Brain Science. “They deserve tremendous credit for joining these early clinical trials. They do this not because they’re hoping to gain any personal benefit, but to help us develop a system that will restore communication and mobility for other people with paralysis.”
