Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, those who cannot produce speech rely on technology that uses eye gaze to produce synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, electrodes placed on the cortex recorded activity from regions involved in speech production as subjects read hundreds of sentences. This activity was then decoded and used to generate intelligible synthesized speech.
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.