CovOps
Subject: Artificial intelligence turns brain activity into speech — Fri Jan 04, 2019 6:09 pm
Epilepsy patients with electrode implants have aided efforts to decipher speech. WENHT/ISTOCK.COM
Artificial intelligence turns brain activity into speech
By Kelly Servick | Jan. 2, 2019, 1:30 PM
For many people who are paralyzed and unable to speak, signals of what they'd like to say hide in their brains. No one has been able to decipher those signals directly. But three research teams recently made progress in turning data from electrodes surgically placed on the brain into computer-generated speech. Using computational models known as neural networks, they reconstructed words and sentences that were, in some cases, intelligible to human listeners. None of the efforts, described in papers in recent months on the preprint server bioRxiv, managed to re-create speech that people had merely imagined. Instead, the researchers monitored parts of the brain as people either read aloud, silently mouthed speech, or listened to recordings. But showing the reconstructed speech is understandable is "definitely exciting," says Stephanie Martin, a neural engineer at the University of Geneva in Switzerland who was not involved in the new projects.

People who have lost the ability to speak after a stroke or disease can use their eyes or make other small movements to control a cursor or select on-screen letters. (Cosmologist Stephen Hawking tensed his cheek to trigger a switch mounted on his glasses.) But if a brain-computer interface could re-create their speech directly, they might regain much more: control over tone and inflection, for example, or the ability to interject in a fast-moving conversation.

The hurdles are high. "We are trying to work out the pattern of … neurons that turn on and off at different time points, and infer the speech sound," says Nima Mesgarani, a computer scientist at Columbia University. "The mapping from one to the other is not very straightforward." How these signals translate to speech sounds varies from person to person, so computer models must be "trained" on each individual. And the models do best with extremely precise data, which requires opening the skull.
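To make the decoding idea concrete: the article describes learning a per-person mapping from electrode recordings to speech. The teams in question used neural networks; the toy sketch below substitutes a plain linear least-squares decoder on simulated data, purely to illustrate the "train a model per individual, then reconstruct speech features from brain activity" workflow. All array sizes, the simulated signals, and the variable names are illustrative assumptions, not anything from the papers.

```python
import numpy as np

# Illustrative sketch only: a linear decoder stands in for the neural
# networks described in the article. Data are simulated, not real ECoG.
rng = np.random.default_rng(0)

n_frames, n_electrodes, n_spec_bins = 500, 64, 32

# Simulated "true" per-person mapping from electrode activity to
# spectrogram frames -- in reality this varies by individual and
# must be learned from paired recordings.
true_map = rng.normal(size=(n_electrodes, n_spec_bins))

ecog = rng.normal(size=(n_frames, n_electrodes))  # simulated brain activity
spectrogram = ecog @ true_map + 0.1 * rng.normal(size=(n_frames, n_spec_bins))

# "Train" the decoder on this person's paired (brain, speech) data.
decoder, *_ = np.linalg.lstsq(ecog, spectrogram, rcond=None)

# Reconstruct speech features from the recorded activity.
reconstruction = ecog @ decoder
error = np.mean((reconstruction - spectrogram) ** 2)
print(f"reconstruction mean squared error: {error:.4f}")
```

The per-individual training step is the key point from the article: `decoder` fitted for one person would not transfer to another, because the electrode-to-sound mapping differs between brains.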
http://www.sciencemag.org/news/2019/01/artificial-intelligence-turns-brain-activity-speech