Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, those who cannot produce speech rely on technology that uses eye gaze to generate synthesized speech one letter at a time. While this technology gives a voice to people who otherwise could not speak, it is considerably slower than natural speech.
In the current study, cortical electrodes recorded activity from regions of the cortex involved in speech production while subjects read hundreds of sentences aloud. A two-stage decoder then translated the neural signals into estimated vocal tract movements and converted those movements into largely intelligible synthesized speech.
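To make the two-stage idea concrete, here is a minimal, hypothetical sketch in Python: stage 1 maps cortical features to articulatory kinematics, and stage 2 maps kinematics to acoustic features. The plain linear maps, random weights, dimensions, and names below are all illustrative placeholders, not the recurrent networks the study actually trained; a real system would learn its parameters from simultaneous recordings of neural activity and speech.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100              # time steps in one recorded sentence (illustrative)
N_ELECTRODES = 256   # cortical recording channels (illustrative)
N_ARTIC = 33         # articulatory kinematic features (illustrative)
N_ACOUSTIC = 32      # acoustic/spectral features (illustrative)

# Stand-in "trained" weights; the real decoder would be fit on
# hundreds of spoken sentences, not drawn at random.
W_articulatory = rng.normal(size=(N_ELECTRODES, N_ARTIC)) * 0.01
W_acoustic = rng.normal(size=(N_ARTIC, N_ACOUSTIC)) * 0.1

def decode_sentence(ecog: np.ndarray) -> np.ndarray:
    """Map a (T, channels) cortical recording to (T, acoustic) features."""
    kinematics = np.tanh(ecog @ W_articulatory)  # stage 1: brain -> vocal tract
    acoustics = kinematics @ W_acoustic          # stage 2: vocal tract -> sound
    return acoustics

ecog = rng.normal(size=(T, N_ELECTRODES))  # fake neural recording
features = decode_sentence(ecog)
print(features.shape)  # (100, 32); these features would drive a vocoder
```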
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.