Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on technology that lets them use eye gaze to generate synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, cortical electrodes recorded activity while subjects read hundreds of sentences aloud. The electrodes monitored regions of the cortex involved in speech production, and the recorded signals were then decoded into intelligible synthesized speech.
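For readers curious about the general shape of such a decoder, the sketch below is a deliberately toy illustration of the two-stage idea reported in the study: neural activity is first mapped to intermediate articulatory features, which are then mapped to acoustic features that could drive a speech synthesizer. All dimensions, weights, and the use of simple linear maps are hypothetical; the actual study used trained recurrent neural networks on real cortical recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: electrode channels, articulatory features,
# acoustic features, and time steps (all made up for illustration).
N_CHANNELS, N_ARTIC, N_ACOUSTIC, T = 64, 12, 32, 100

# Simulated cortical recordings: T time steps of N_CHANNELS features.
neural = rng.standard_normal((T, N_CHANNELS))

# Stage 1: map neural activity to articulatory kinematics.
# (Random linear weights stand in for a trained recurrent network.)
W_artic = rng.standard_normal((N_CHANNELS, N_ARTIC)) * 0.1
articulatory = np.tanh(neural @ W_artic)

# Stage 2: map articulatory features to acoustic features
# that a vocoder-style synthesizer could turn into audio.
W_acoustic = rng.standard_normal((N_ARTIC, N_ACOUSTIC)) * 0.1
acoustic = articulatory @ W_acoustic

print(acoustic.shape)  # one acoustic feature vector per time step
```

The two-stage structure matters because articulatory movements are a more stable intermediate target than raw audio, which is part of why the decoded speech was intelligible.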
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.