
Woman with paralysis speaks through an avatar 18 years after a stroke, thanks to a brain implant and AI

A pair of new experiments helped people with paralysis communicate audibly in close to real time. 
Ann Johnson, a participant in Dr. Eddie Chang’s study of speech neuroprostheses, uses a digital link wired to her cortex to speak through an avatar in El Cerrito, Calif., on May 22, 2023. Noah Berger

In 2005, Ann Johnson suffered a stroke that left her severely paralyzed and unable to speak. She was 30.

At best, she could make sounds like “ooh” and “ah,” but her brain was still firing off signals.

Now, in a scientific milestone 18 years after Johnson's stroke, an experimental technology has translated her brain signals into audible words, enabling her to communicate through a digital avatar.

The technology, developed by researchers at the University of California, San Francisco, and the University of California, Berkeley, relies on an implant placed on the surface of Johnson's brain in regions associated with speech and language.

The implant, which Johnson received in an operation last year, contains 253 electrodes that intercept brain signals from thousands of neurons. During the surgery, doctors also installed a port in Johnson's head that connects to a cable, which carries her brain signals to a computer bank.

The computers use artificial intelligence algorithms to translate the brain signals into sentences that get spoken through a digitally animated figure. So when Johnson tried to say a sentence like “Great to see you again,” the avatar on a nearby screen uttered those words out loud.
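In broad strokes, that translation step is a pattern-recognition problem: each attempted word or sound produces a distinctive pattern of activity across the electrodes, and software learns to map those patterns to vocabulary. The Python sketch below is a simplified, hypothetical illustration using synthetic signals and a nearest-pattern classifier; the actual system uses deep neural networks trained on Johnson's own recordings, and everything here except the 253-channel count is invented for illustration.

```python
# Toy illustration of a neural speech decoder: classify simulated
# multichannel "brain signal" feature vectors into words.
# All data is synthetic; the real system trains deep networks on
# recordings from an electrode array like the one described above.
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 253  # electrode count from the article
VOCAB = ["great", "to", "see", "you", "again"]

# One characteristic activity pattern per word (hypothetical).
prototypes = {w: rng.normal(size=N_CHANNELS) for w in VOCAB}

def simulate_attempt(word: str) -> np.ndarray:
    """A noisy 'recording' of one attempted word."""
    return prototypes[word] + rng.normal(scale=0.5, size=N_CHANNELS)

def decode(signal: np.ndarray) -> str:
    """Pick the word whose stored pattern is closest to the signal."""
    return min(VOCAB, key=lambda w: np.linalg.norm(signal - prototypes[w]))

attempted = ["great", "to", "see", "you", "again"]
decoded = [decode(simulate_attempt(w)) for w in attempted]
print(" ".join(decoded))  # typically recovers: great to see you again
```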

The system appears to be significantly faster and more accurate than previous technologies that attempted similar feats, and it allowed Johnson to communicate using a relatively expansive vocabulary.

The researchers used a recording of Johnson speaking at her wedding to personalize the avatar’s voice. The system also converted Johnson's brain signals into facial movements on the avatar, such as pursed lips, and emotional expressions, like sadness or surprise.

The results of the experiment were published Wednesday in the journal Nature.

University of California, San Francisco, clinical research coordinator Max Dougherty connects a neural data port in Johnson’s head in El Cerrito, Calif., on May 22, 2023. Noah Berger

Dr. Edward Chang, an author of the study who performed Johnson’s surgery, said he was “absolutely thrilled” to watch her communicate through the avatar.

“There’s nothing that can convey how satisfying it is to see something like this actually work in real time,” Chang, the chair of neurological surgery at UCSF, said at a news briefing.

The technology converted Johnson's speech attempts into words at nearly 80 words per minute; Chang said the natural rate of speech is around 150 to 200 words per minute. The system had a median accuracy of around 75% when Johnson used a 1,024-word vocabulary.

In a feedback survey, Johnson wrote that she was emotional upon hearing the avatar speak in a voice similar to hers.

"The first 7 years after my stroke, all I used was a letterboard. My husband was so sick of having to get up and translate the letterboard for me," she wrote.

Johnson uses a digital link wired to her cortex to interface with an avatar in El Cerrito, Calif., on May 22, 2023. Noah Berger

Going into the study, she said, her moonshot goal was to become a counselor and use the technology to talk to clients.

"I think the avatar would make them more at ease," she wrote.

The technology still requires a wired connection, however, and it hasn’t yet advanced enough to be integrated into Johnson’s daily life.

Two parallel studies show how brain implants can enable speech

A second study, also published Wednesday in Nature, similarly helped a woman with paralysis communicate in close to real time.

The subject, Pat Bennett, has Lou Gehrig’s disease, or amyotrophic lateral sclerosis, a neurological condition that weakens muscles. Bennett can still move around and dress herself, but she can no longer use the muscles in her mouth and throat to form words.

After implanting two small sensors in her brain, researchers at Stanford University trained a software program to decode signals from individual brain cells and convert them into words on a computer screen. As in the first study, the sensors were connected to the computer by a cable.

The technology converted Bennett's speech attempts into words at a rate of 62 words per minute, and it was about 91% accurate when she used a 50-word vocabulary. But the accuracy fell to around 76% when she used a 125,000-word vocabulary, meaning about 1 out of every 4 words was wrong.
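Accuracy figures like these come from comparing the decoded output against what the speaker intended to say. The hypothetical Python snippet below illustrates the idea with made-up transcripts; the published studies use a stricter word error rate based on edit-distance alignment.

```python
# Toy word-accuracy calculation, the kind of metric behind figures
# like "91% accurate." This simplified version assumes the decoded
# transcript has the same length as the intended one; real studies
# align the two with edit distance and report word error rate.
intended = "I want to see my family this weekend".split()
decoded = "I want to see my fairly this weekend".split()

correct = sum(a == b for a, b in zip(intended, decoded))
print(f"{correct / len(intended):.0%} of words correct")  # 88% here
```

At 76% accuracy, that same arithmetic gives roughly 1 wrong word in every 4.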

“Eventually technology will catch up to make it easily accessible to people who cannot speak,” Bennett wrote in a statement. “For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships.”

Brain-computer communication isn't perfect yet

Experiments that use electrodes to read brain signals date to the late 1990s, but the research field has made major strides in recent years.

In 2021, the Stanford team behind Bennett's experiment used a brain implant and artificial intelligence software to translate brain signals involved in handwriting from a paralyzed man into text on a computer screen. The same year, Chang's research group at UCSF demonstrated for the first time that it could successfully translate brain signals from a man with severe paralysis directly into words.

But the two new experiments described in Nature enabled much faster communication than previous attempts.

“With these new studies, it is now possible to imagine a future where we could restore fluent conversation to someone with paralysis, enabling them to freely say whatever they want to say with an accuracy high enough to be understood reliably,” the lead author of the Stanford study, Francis Willett, a staff scientist at Stanford’s Neural Prosthetics Translational Laboratory, said at the briefing.

Pat Bennett, bottom, participates in a research session. Steve Fisch

An editorial published Wednesday alongside the two studies highlighted several challenges to making the technologies widely available, however.

First, it noted that both participants can still move their facial muscles and make sounds to some degree, so it’s unclear how the systems would perform in people without any residual movement. Second, it questioned whether the technology could be operated by anyone other than a highly skilled researcher.

The systems “remain too complicated for caregivers to operate in home settings without extensive training and maintenance,” the editorial said.

Dr. Jaimie Henderson, a neurosurgery professor at Stanford who performed Bennett’s operation, acknowledged the limitations but said there’s plenty of room for further improvement.

"I think implantable, surgically placed technologies are here for at least the foreseeable future," Henderson said.

His long-term goal, he said, is to ensure that people with Lou Gehrig's disease never lose the ability to communicate.