Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on technology that uses eye gaze to generate synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, cortical electrodes recorded activity while subjects read hundreds of sentences aloud. The electrodes monitored regions of the cortex involved in speech production, and the recorded activity was then decoded into intelligible synthesized speech.
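The study's decoder works in two stages: cortical activity is first mapped to articulatory (vocal-tract) movements, and those movements are then mapped to acoustic features used for synthesis. The sketch below is a hypothetical illustration of that pipeline shape only; the dimensions are illustrative, and the linear maps stand in for the recurrent neural networks the study actually trained.

```python
import numpy as np

# Hypothetical two-stage decoding sketch (dimensions illustrative only).
# Stage 1 maps cortical features to articulatory kinematics; stage 2 maps
# kinematics to acoustic features. Linear maps stand in here for the
# recurrent networks used in the actual study.
rng = np.random.default_rng(0)

T = 100           # time steps in a recorded sentence
N_NEURAL = 256    # cortical feature channels (assumed)
N_ARTIC = 33      # articulatory kinematic features (assumed)
N_ACOUSTIC = 32   # acoustic features for the synthesizer (assumed)

neural = rng.standard_normal((T, N_NEURAL))     # ECoG-like features per time step

W1 = rng.standard_normal((N_NEURAL, N_ARTIC))   # stand-in for the stage-1 decoder
W2 = rng.standard_normal((N_ARTIC, N_ACOUSTIC)) # stand-in for the stage-2 decoder

articulatory = neural @ W1    # stage 1: brain activity -> vocal-tract movements
acoustic = articulatory @ W2  # stage 2: movements -> speech acoustics

print(acoustic.shape)  # one acoustic feature vector per time step
```

The point of the intermediate articulatory stage is that vocal-tract movements are a more stable, lower-dimensional target than raw acoustics, which reportedly made the decoding problem more tractable.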
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.