Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on technology that lets them use eye gaze to produce synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, cortical electrodes recorded activity as subjects read hundreds of sentences, monitoring the regions of cortex involved in speech production. These signals were then decoded to produce intelligible synthesized speech.
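The study's actual decoder is far more sophisticated than what fits here, but the core idea, learning a mapping from recorded neural activity to speech features, can be sketched with a toy linear decoder on simulated data. Every dimension and signal below is an illustrative assumption, not data or methodology from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the study):
# 64 simulated electrode channels, 20 acoustic features.
n_samples, n_channels, n_acoustic = 500, 64, 20

# Simulate a linear relationship: neural activity X drives
# acoustic features Y through an unknown weight matrix W_true.
X = rng.standard_normal((n_samples, n_channels))
W_true = rng.standard_normal((n_channels, n_acoustic))
Y = X @ W_true + 0.01 * rng.standard_normal((n_samples, n_acoustic))

# Fit a least-squares decoder: find W minimizing ||X @ W - Y||^2.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Decode held-out simulated "neural" data into acoustic features,
# which a vocoder would then render as audible speech.
X_new = rng.standard_normal((10, n_channels))
Y_pred = X_new @ W_hat
print(Y_pred.shape)  # (10, 20)
```

In the published work this mapping is learned by recurrent neural networks that first estimate articulatory movements and then acoustics; the linear version above only illustrates the decoding concept.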
Reference
Anumanchipalli GK, Chartier J, Chang E. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.