Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot speak rely on technology that uses eye gaze to spell out synthesized speech one letter at a time. While this gives them a voice, it is considerably slower than natural speech.
In the current study, electrodes placed over speech-related areas of the cortex recorded activity while participants read hundreds of sentences aloud. This activity was then decoded and converted into intelligible synthesized speech.
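The paper describes decoding in stages: cortical activity is first mapped to an intermediate representation of vocal-tract (articulatory) movements, which is then mapped to acoustic features for synthesis. The sketch below illustrates that kind of two-stage recurrent pipeline in a general way; the layer types, layer sizes, and feature dimensions are assumptions for illustration, not the authors' exact architecture.

```python
# Illustrative two-stage neural decoding pipeline (not the authors' implementation):
# stage 1 maps cortical recordings to articulatory features, stage 2 maps those to
# acoustic features that a vocoder could turn into audio.

import torch
import torch.nn as nn


class StageDecoder(nn.Module):
    """Recurrent network mapping one feature time series to another."""

    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, out_dim)

    def forward(self, x):            # x: (batch, time, in_dim)
        h, _ = self.rnn(x)
        return self.proj(h)          # (batch, time, out_dim)


# Hypothetical dimensions: 256 electrode channels, 33 articulatory features,
# 32 acoustic (spectral) features per time step.
brain_to_artic = StageDecoder(in_dim=256, hidden_dim=128, out_dim=33)
artic_to_audio = StageDecoder(in_dim=33, hidden_dim=128, out_dim=32)

# Simulated cortical recording: 1 sentence, 500 time steps, 256 channels.
ecog = torch.randn(1, 500, 256)
articulation = brain_to_artic(ecog)        # decoded vocal-tract movements
acoustics = artic_to_audio(articulation)   # spectral features for a vocoder
print(acoustics.shape)                     # torch.Size([1, 500, 32])
```

In practice, each stage would be trained on paired recordings of cortical activity and speech; here the networks are untrained and simply show how the data would flow.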
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.