Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on technology that lets them use eye gaze to generate synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is far slower than natural speech.
In the study, cortical electrodes recorded activity as subjects read hundreds of sentences aloud, monitoring regions of the cortex involved in speech production. Decoding this activity yielded intelligible synthesized speech.
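The study's decoder worked in two stages, first mapping cortical activity to an intermediate articulatory representation and then mapping that representation to acoustic features. The toy sketch below illustrates only the shape of such a pipeline; the dimensions and fixed random matrices are assumptions for illustration, whereas the actual study used trained recurrent neural networks.

```python
# Toy sketch of a two-stage neural-to-speech decoding pipeline.
# All dimensions and weights here are illustrative assumptions,
# not values from the study.
import numpy as np

rng = np.random.default_rng(0)

n_timesteps, n_electrodes = 100, 256   # hypothetical recording size
n_articulatory, n_acoustic = 33, 32    # hypothetical feature sizes

# Simulated cortical activity: one feature vector per time step.
neural = rng.standard_normal((n_timesteps, n_electrodes))

# Stage 1: cortical activity -> articulatory-like representation.
W1 = rng.standard_normal((n_electrodes, n_articulatory)) / np.sqrt(n_electrodes)
articulatory = np.tanh(neural @ W1)

# Stage 2: articulatory representation -> acoustic-like features,
# which a synthesizer would then render as audible speech.
W2 = rng.standard_normal((n_articulatory, n_acoustic)) / np.sqrt(n_articulatory)
acoustics = articulatory @ W2

print(acoustics.shape)  # one acoustic feature vector per time step
```

The key design idea, per the study, is that routing through an articulatory intermediate stage made decoding more accurate than mapping brain activity directly to sound.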
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.