
Device turns thoughts into speech

Scientists embed an electrode directly into a patient's brain and then develop a computer system to read his thoughts.
Image: A schematic shows how signals are transmitted from the brain to a real-time speech synthesizer. (Guenther et al., PLoS ONE)
Source: Discovery Channel

Scientists have successfully tested a system that translates brain waves into speech, raising the prospect that people left mute by stroke, Lou Gehrig's disease and other afflictions will one day be able to communicate by synthetic voice.

The system was tested on a 26-year-old man left paralyzed by a brain stem stroke but with his consciousness and cognitive abilities intact, a condition known as "locked-in syndrome." People with the condition can communicate by eye movement or other limited motion, but only with great difficulty.

For example, British theoretical physicist Stephen Hawking, who is nearly completely paralyzed as a result of amyotrophic lateral sclerosis, or Lou Gehrig's disease, takes several minutes to compose a short sentence that is rendered into speech by a computer.

Scientists implanted an electrode about 5 millimeters deep into the part of the subject's brain responsible for planning speech. After a few months nerve cells grew into the electrode, producing detectable signals.

It took several years, however, to develop a computer system that could discriminate elements of speech from the busy backdrop of neural activity, lead researcher Frank Guenther, with the Department of Cognitive and Neural Systems at Boston University, told Discovery News.

"All the neurons are firing all the time, but there's a subtle change in the firing rates. The trick was trying to decode that," Guenther said.

The first "words" detected from the subject's brain were three vowel sounds, and the delay between the speech thought and the audible sound was about 50 milliseconds -- the same lag that typically occurs in naturally produced speech.

The implanted electrode amplifies neural signals and converts them into FM radio signals, which are then transmitted wirelessly across the subject's scalp to two coils on his head that serve as receiving antennas.

The signals are then routed into a system that digitizes, sorts and decodes them. The results are fed into a speech-synthesis program that runs on a desktop or laptop computer.
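As a rough illustration of that final synthesis stage, the sketch below turns a decoded formant pair into a vowel-like audio buffer by passing a glottal impulse train through two resonant filters. This is a textbook formant-synthesis approximation with assumed parameters, not the system's actual synthesizer.

```python
import numpy as np

def resonator_coeffs(freq_hz, bandwidth_hz, fs):
    """Second-order IIR resonator, a standard formant-filter building block."""
    r = np.exp(-np.pi * bandwidth_hz / fs)
    theta = 2 * np.pi * freq_hz / fs
    # Difference equation: y[n] = x[n] + 2r*cos(theta)*y[n-1] - r^2*y[n-2]
    return 2 * r * np.cos(theta), -r * r

def synthesize_vowel(f1, f2, fs=16000, dur=0.3, pitch_hz=120):
    """Crude vowel synthesis: impulse train filtered by F1 and F2 resonators."""
    n = int(fs * dur)
    x = np.zeros(n)
    x[:: int(fs / pitch_hz)] = 1.0            # glottal impulse train at voice pitch
    for f, bw in ((f1, 80.0), (f2, 100.0)):   # cascade the two formant filters
        a1, a2 = resonator_coeffs(f, bw, fs)
        y = np.zeros(n)
        for i in range(n):
            y[i] = x[i] + a1 * y[i - 1] + a2 * y[i - 2]
        x = y
    return x / np.max(np.abs(x))              # normalized audio buffer

# Formants near the vowel "ah"; the buffer could be written to a sound device.
audio = synthesize_vowel(f1=730.0, f2=1090.0)
print(audio.shape)
```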

"The most significant thing is that this shows it would be possible for someone who is paralyzed to speak in real-time rather than going through a painful typing process," Guenther said. "This communication is very important because these people are completely locked out from the rest of the world."

The researchers plan a follow-up study in early 2010 that will significantly increase how much information is collected from the brain, with the aim of adding consonants, and then words, to the speech prosthesis.

"The human brain function is very complicated," said Hui Mao, associate professor of radiology at Emory University School of Medicine. "So far we've only scratched the surface. We're recording simple brain processes at this point, but the proof of principle and the demonstration that this works opens the opportunity for different experts to come into the field."

The research was published Dec. 9 in the online science journal PLoS ONE.