Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, those who cannot produce speech rely on technology that tracks eye gaze to produce synthesized speech one letter at a time. While this technology gives a voice to those who otherwise could not speak, it is far slower than natural speech production.
In the current study, cortical electrodes recorded activity from regions of the cortex involved in speech production while subjects read hundreds of sentences aloud. These recordings were then decoded into intelligible synthesized speech.
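The decoding pipeline described above can be sketched at a very high level: cortical recordings are mapped to an intermediate representation, which is then mapped to spectral parameters that a vocoder could render as audio. The toy example below uses synthetic data and simple linear maps purely for illustration; the study itself used recurrent neural networks and an articulatory intermediate representation, and all array sizes here are arbitrary assumptions.

```python
# Illustrative sketch only: a toy two-stage decode (neural activity ->
# intermediate features -> spectral frames). All data are synthetic and
# the linear maps stand in for the trained networks used in the study.
import numpy as np

rng = np.random.default_rng(0)

T, n_electrodes, n_features, n_spectral = 100, 64, 33, 32

# Synthetic "cortical recordings": T time steps from n_electrodes channels.
neural = rng.normal(size=(T, n_electrodes))

# Stage 1: map neural activity to intermediate articulatory-like features.
W1 = rng.normal(size=(n_electrodes, n_features)) / np.sqrt(n_electrodes)
features = neural @ W1

# Stage 2: map features to spectral parameters a vocoder could turn into audio.
W2 = rng.normal(size=(n_features, n_spectral)) / np.sqrt(n_features)
spectra = features @ W2

print(spectra.shape)  # one spectral frame per time step
```

The key design point is the intermediate stage: decoding to articulatory-like features first, rather than directly to audio, is what the authors report made the synthesized speech intelligible.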
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.