Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, those who cannot produce speech rely upon technology that allows them to use eye gaze to produce synthesized speech one letter at a time. While this gives those who otherwise could not speak a voice, it is considerably slower than natural speech production.
In the current study, cortical electrodes recorded neural activity as subjects read hundreds of sentences aloud. The electrodes monitored regions of the cortex involved in speech production, and this activity was then decoded to produce intelligible synthesized speech.
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.