Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, those who cannot produce speech rely on technology that lets them use eye gaze to produce synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is far slower than natural speech production: the study's authors note that such letter-by-letter interfaces typically allow around 10 words per minute, compared with roughly 150 words per minute for natural speech.
In the current study, cortical electrodes recorded activity from regions of the cortex involved in speech production while subjects read hundreds of sentences aloud. That activity was then decoded, first into estimated movements of the vocal tract and then into acoustic features, yielding intelligible synthesized speech.
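The two-stage decoding idea can be sketched, in drastically simplified form, as a pair of mappings: neural features to articulatory estimates, then articulatory estimates to acoustic parameters. Everything below is an illustrative toy with made-up names and numbers; the actual study trained recurrent neural networks on high-density cortical recordings, not hand-set linear weights.

```python
# Toy sketch of a two-stage neural-to-speech decoding pipeline.
# All weights, gains, and feature values here are hypothetical placeholders.

# Illustrative parameters: 2 articulator estimates from 2 neural channels.
ARTICULATOR_WEIGHTS = [[0.5, 0.2], [0.1, 0.9]]
ACOUSTIC_GAINS = [1.5, 0.8]

def decode_articulation(neural_features):
    """Stage 1: map one frame of cortical features to articulator estimates."""
    return [sum(f * w for f, w in zip(neural_features, weights))
            for weights in ARTICULATOR_WEIGHTS]

def synthesize_acoustics(articulation):
    """Stage 2: map articulator estimates to acoustic parameters."""
    return [a * gain for a, gain in zip(articulation, ACOUSTIC_GAINS)]

# One (toy) time-frame of cortical features, decoded in two stages.
neural_frame = [0.4, 0.7]
acoustics = synthesize_acoustics(decode_articulation(neural_frame))
print(acoustics)
```

In the real system, each frame of decoded acoustic parameters would drive a vocoder to produce audible speech; the toy above only shows why an articulatory intermediate stage sits between the neural recording and the sound.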
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.