Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, those who cannot produce speech rely on technology that uses eye gaze to generate synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, cortical electrodes gathered information as subjects read hundreds of sentences. The electrodes monitored various portions of the cortex involved in speech production. This information was processed and resulted in intelligible synthesized speech.
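The two-stage idea described above (cortical recordings decoded first into intermediate features, then into acoustic output for synthesis) can be sketched as follows. This is a purely illustrative toy, not the study's method: the dimensions are hypothetical, and simple linear maps stand in for the trained neural-network decoders the investigators actually used.

```python
import numpy as np

# Illustrative sketch of a two-stage neural-to-speech decoding pipeline.
# All dimensions and the linear "decoders" are placeholder assumptions;
# the real system learned these mappings from recorded data.

rng = np.random.default_rng(0)

N_ELECTRODES = 64   # hypothetical number of cortical electrode channels
N_ARTIC = 33        # hypothetical articulatory (vocal-tract) feature count
N_ACOUSTIC = 32     # hypothetical acoustic feature count fed to a vocoder
N_TIMESTEPS = 100   # time frames in one recorded utterance

# Stage 1: map cortical activity to articulatory features.
stage1 = rng.normal(size=(N_ELECTRODES, N_ARTIC))
# Stage 2: map articulatory features to acoustic features.
stage2 = rng.normal(size=(N_ARTIC, N_ACOUSTIC))

def decode(cortical_activity: np.ndarray) -> np.ndarray:
    """Decode a (timesteps, electrodes) recording into acoustic features."""
    articulatory = cortical_activity @ stage1  # brain -> vocal-tract movements
    acoustic = articulatory @ stage2           # movements -> sound features
    return acoustic

recording = rng.normal(size=(N_TIMESTEPS, N_ELECTRODES))
features = decode(recording)
print(features.shape)  # (100, 32)
```

In the actual study, the acoustic features produced by a stage like this were passed to a speech synthesizer to generate the audible sentences.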
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.