Wireless brain-machine interfaces could one day decode speech signals from the brain in real time, helping people with brain injuries talk, new research suggests.
Recently, scientists have developed brain-machine interfaces that help restore communication to people who can no longer speak by reading brainwaves using electrodes stuck on their heads. Unfortunately, these have proved very slow, at roughly one word typed per minute, making normal conversations and social interactions virtually impossible.
Now cognitive neuroscientist Frank Guenther at Boston University and his colleagues have revealed a brain-machine interface that uses an electrode implanted directly into the brain to produce speech in real time.
"It should soon be possible for profoundly paralyzed individuals who are currently incapable of speaking to produce speech through a laptop computer," Guenther told LiveScience.
The scientists worked with a 26-year-old male volunteer who experiences near-total paralysis due to a stroke he suffered when he was 16 years old. They implanted an electrode that had two wires into a part of the brain that helps plan and execute movements related to speech.
The electrode recorded brain signals when the volunteer attempted to talk and wirelessly transmitted them across the scalp to drive a speech synthesizer. The delay between brain activity and sound output was just 50 milliseconds on average, roughly the same as in natural speech.
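The pipeline just described — recorded firing rates decoded into sound parameters that drive a synthesizer — can be sketched in a few lines. Everything below is illustrative: the unit count, decoder weights, and the crude two-formant synthesizer are assumptions for the sketch, not details from the study.

```python
import numpy as np

# Hypothetical sketch of the closed-loop pipeline: neural firing rates
# -> linear decoder -> formant frequencies (F1, F2) -> synthesized vowel.

def decode_formants(firing_rates, weights, bias):
    """Map a vector of unit firing rates to (F1, F2) formant frequencies in Hz."""
    return weights @ firing_rates + bias

def synthesize_vowel(f1, f2, duration=0.05, sr=16000):
    """Crude two-formant vowel: two damped sinusoids at F1 and F2."""
    t = np.arange(int(duration * sr)) / sr
    return (np.sin(2 * np.pi * f1 * t)
            + 0.5 * np.sin(2 * np.pi * f2 * t)) * np.exp(-5 * t)

rng = np.random.default_rng(0)
n_units = 40                           # illustrative number of recorded units
weights = rng.normal(scale=5.0, size=(2, n_units))
bias = np.array([500.0, 1500.0])       # rough center of the vowel space (Hz)

rates = rng.poisson(10, size=n_units).astype(float)  # one 50-ms window of spikes
f1, f2 = decode_formants(rates, weights, bias)
audio = synthesize_vowel(f1, f2)       # 50 ms of audio per decoding step
```

Each 50-millisecond window of neural data yields one audio chunk, which is why the end-to-end delay can stay near the latency of natural speech.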
"He was quite excited, particularly on the first few days we used the system, as he got used to its properties," Guenther recalled. "I am sure the work seems to proceed slowly from his perspective, as it does from ours at times. Nonetheless he was very excited about getting real-time audio feedback of his intended speech and happy to work very hard with us throughout the experiments."
The researchers focused on vowels, since the sound components involved have been studied for decades and software is available to quickly synthesize them. The volunteer's accuracy at producing vowels with the synthesizer improved quickly with practice, rising from 45 to 89 percent over 25 sessions spanning five months.
"Our volunteer was able to produce vowel-to-vowel sequences like 'uh-ee,' which are relatively easy speech 'movements,'" Guenther explained. "The next challenge is consonant production. This will require a different kind of synthesizer — an articulatory synthesizer, where the user will control movements of a 'virtual tongue.'"
"Such a synthesizer will allow whole words to be produced, but at the cost of a more complicated system for the user to control," he continued. "This, coupled with increases in the number of electrodes that can be recorded from and transmitted across the scalp, should eventually lead to a system that will allow the user to produce words and whole sentences."
The current system uses data from just two wires. "Within a year it will be possible to implant a system with 16 times as many," Guenther said. "This will allow us to tap into many more neurons, which in the end means much better control over a synthesizer and thus much better speech."
© 2012 LiveScience.com. All rights reserved.