New Technology That Transforms Brain Signals Into Speech May Give Voice To People With Parkinson's, Throat Cancer
Friday 26/April/2019 - 10:14 PM
Technology has advanced so greatly that even patients who have completely lost their voice could soon have it restored, as the Technical Times reported.
In fact, scientists have already developed a computer-based system that can translate brain activity into speech.
Someday, this system could help individuals who have lost their speech to various conditions, such as Parkinson's disease, throat cancer, and paralysis.
"Speech is an amazing form of communication that has evolved over thousands of years to be very efficient،" said Edward F. Chang، M.D.، senior author of the study and professor of neurological surgery at the University of California، San Francisco.
"Many of us take for granted how easy it is to speak، which is why losing that ability can be so devastating. It is our hope that this approach will be helpful to people whose muscles enabling audible speech are paralyzed."
Scientists Develop Computer-Generated Speech Translator
In a study published in the journal Nature, the researchers shared the details of their new technology. First, the team recorded the brain activity of epilepsy patients who had no speech problems and were scheduled to undergo surgery.
The researchers had each patient speak or mime full sentences, then constructed maps of how the brain directs the vocal system to make sounds. In the second step, these maps were fed into a computer program that produced the speech.
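The two-step pipeline described above can be sketched in rough outline as follows. This is only a toy illustration: the linear mappings, feature names, and dimensions here are made-up assumptions for demonstration, not the authors' actual model, which trained recurrent neural networks on real cortical recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_kinematics(neural, W1):
    """Stage 1: map brain activity to vocal-tract movement features
    (the 'maps' of how the brain drives the vocal system)."""
    return neural @ W1

def synthesize_speech(kinematics, W2):
    """Stage 2: map movement features to acoustic features,
    from which audible speech would be synthesized."""
    return kinematics @ W2

# Toy data: 100 time steps of 64-channel "neural" activity.
neural = rng.standard_normal((100, 64))
W1 = rng.standard_normal((64, 33))   # 33 articulatory features (assumed)
W2 = rng.standard_normal((33, 32))   # 32 acoustic features (assumed)

acoustics = synthesize_speech(decode_kinematics(neural, W1), W2)
print(acoustics.shape)  # (100, 32)
```

Splitting the problem into brain-to-movement and movement-to-sound stages is what makes the second stage reusable across patients, as the article notes below.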
Volunteers listened to the computer-generated speech and were asked to transcribe what they heard. More than half the time, they successfully understood what the computer program was trying to say.
Amazingly, the second step of translating the vocal maps into sounds seems to be generalizable and accurate even across patients. Since it would be difficult to get vocal maps from paralyzed patients, it's fortunate that data from non-paralyzed individuals could be used in the system.
Findings show that even just miming speech was enough for the computer to generate some of the same sounds.
Edited by Ahmed Moamar