Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on technology that uses eye gaze to spell out synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, electrodes placed on the cortex recorded activity as subjects read hundreds of sentences aloud. The electrodes monitored regions of the cortex involved in speech production, and the recorded activity was then decoded into intelligible synthesized speech.
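The published decoder worked in two stages: it first mapped cortical signals to articulatory kinematics (the movements of the vocal tract), and then mapped those kinematics to acoustic features that drive a speech synthesizer. The Python sketch below illustrates only that two-stage structure. It substitutes untrained linear maps for the study's recurrent neural networks, and every dimension, variable, and name is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

# Illustrative dimensions only; the study's actual feature sizes differ.
N_TIMESTEPS = 200    # time frames for one spoken sentence (assumed)
N_ELECTRODES = 256   # high-density cortical recording channels (assumed)
N_KINEMATIC = 33     # articulatory kinematic features (assumed)
N_ACOUSTIC = 32      # acoustic features fed to a synthesizer (assumed)

rng = np.random.default_rng(0)

# Stand-in for cortical activity recorded while a subject reads a sentence.
ecog = rng.standard_normal((N_TIMESTEPS, N_ELECTRODES))

# Stage 1: decode articulatory kinematics from neural activity.
# The study used recurrent networks; an untrained linear map stands in here.
W_neural_to_kinematic = 0.01 * rng.standard_normal((N_ELECTRODES, N_KINEMATIC))
kinematics = ecog @ W_neural_to_kinematic

# Stage 2: map kinematics to acoustic features, which a vocoder would
# then render as audible synthesized speech.
W_kinematic_to_acoustic = 0.01 * rng.standard_normal((N_KINEMATIC, N_ACOUSTIC))
acoustics = kinematics @ W_kinematic_to_acoustic

print("Cortical frames:", ecog.shape)
print("Decoded kinematics:", kinematics.shape)
print("Acoustic features for synthesis:", acoustics.shape)
```

The design point the sketch captures is that the intermediate articulatory representation breaks one hard decoding problem into two easier ones, which is what allowed the study's synthesized sentences to remain intelligible.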
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.