Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, those who cannot produce speech rely on technology that uses eye gaze to produce synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, cortical electrodes recorded neural activity as subjects read hundreds of sentences aloud. The electrodes monitored portions of the cortex involved in speech production, and the recorded activity was decoded into intelligible synthesized speech.
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.