Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on eye-gaze technology to spell out synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech.
In the current study, cortical electrodes recorded activity from regions of the cortex involved in speech production while subjects read hundreds of sentences. That activity was then decoded to produce intelligible synthesized speech.
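For readers curious about what "decoding cortical activity into speech" means computationally, below is a minimal, hypothetical sketch in Python. It is not the study's actual model: the channel counts, the two-stage structure, and the random linear mappings are stand-in assumptions used only to show the general shape of a pipeline from multichannel cortical recordings to acoustic parameters.

```python
import numpy as np

# Illustrative two-stage decoding sketch (not the study's actual model):
# stage 1 maps multichannel cortical recordings to an intermediate speech
# feature vector; stage 2 maps those features to acoustic parameters that
# a vocoder could turn into audio. Both stages are stand-in linear maps
# with random weights, purely to show the shape of the pipeline.

rng = np.random.default_rng(0)

N_CHANNELS = 256   # assumed number of cortical electrode channels
N_FEATURES = 33    # assumed size of the intermediate feature vector
N_ACOUSTIC = 32    # assumed size of the acoustic parameter vector
N_FRAMES = 200     # number of time frames in one decoded utterance

# Stand-in "learned" mappings; a real decoder would be trained on
# recordings made while subjects read sentences aloud.
stage1 = rng.normal(size=(N_CHANNELS, N_FEATURES))
stage2 = rng.normal(size=(N_FEATURES, N_ACOUSTIC))

# Simulated cortical activity: frames x channels.
neural_frames = rng.normal(size=(N_FRAMES, N_CHANNELS))

# Decode frame by frame:
# cortical activity -> intermediate features -> acoustic parameters.
intermediate = neural_frames @ stage1
acoustic_params = intermediate @ stage2

print(acoustic_params.shape)  # (200, 32): one acoustic frame per time step
```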
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.