Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on technology that uses eye gaze to generate synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, cortical electrodes recorded activity from regions of the cortex involved in speech production while subjects read hundreds of sentences. This neural activity was then decoded and converted into intelligible synthesized speech.
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.