Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on eye-gaze technology to spell out synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, electrodes placed on the cortex recorded activity from areas involved in speech production as subjects read hundreds of sentences aloud. The recorded signals were then decoded and used to generate intelligible synthesized speech.
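For readers curious about the decoding idea, the sketch below is a toy illustration, not the authors' code. The published study used recurrent neural networks in a two-stage decoder (cortical signals to articulatory movements, then movements to acoustic features); the sketch substitutes simple ridge regressions on synthetic data to show the staged mapping. All array shapes, variable names, and the synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real recordings:
# 200 time steps of 64-channel cortical (ECoG) features.
neural = rng.normal(size=(200, 64))

# Hidden "ground truth" mappings, used here only to fabricate toy targets.
kinematics = neural @ rng.normal(size=(64, 12))      # 12 articulator traces
acoustics = kinematics @ rng.normal(size=(12, 32))   # 32 spectral features

# Stage 1: decode articulatory kinematics from cortical activity.
stage1 = Ridge(alpha=1.0).fit(neural, kinematics)
decoded_kinematics = stage1.predict(neural)

# Stage 2: map decoded kinematics to acoustic features for synthesis.
stage2 = Ridge(alpha=1.0).fit(decoded_kinematics, acoustics)
decoded_acoustics = stage2.predict(decoded_kinematics)

print("decoded acoustic feature matrix:", decoded_acoustics.shape)  # (200, 32)
```

In the actual system, the decoded acoustic features are passed to a vocoder to produce an audible speech waveform.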
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.