Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, those who cannot produce speech rely on technology that uses eye gaze to select letters one at a time for synthesized speech. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, electrodes placed on the cortex recorded neural activity as subjects read hundreds of sentences aloud. The electrodes monitored several regions of the cortex involved in speech production. The recorded activity was first decoded into the intended movements of the vocal tract and then converted into intelligible synthesized speech.
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.