Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, those who cannot produce speech rely upon technology that allows them to use eye gaze to produce synthesized speech one letter at a time. While this gives those who otherwise could not speak a voice, it is considerably slower than natural speech production.
In the current study, cortical electrodes recorded neural activity as subjects read hundreds of sentences aloud. The electrodes monitored regions of the cortex involved in speech production, and the recorded activity was then decoded to produce intelligible synthesized speech.
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.