Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on technology that lets them use eye gaze to generate synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech.
In the current study, electrodes placed on the cortex recorded activity from brain regions involved in speech production as participants read hundreds of sentences aloud. These recordings were then decoded, first into the vocal-tract movements that would have produced the spoken sentences and then into acoustic features, yielding intelligible synthesized speech.
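To make that two-stage idea concrete, here is a toy sketch in Python: neural recordings are mapped first to articulatory (vocal-tract) features and then to acoustic features. All of the names, dimensions, and the simple linear mappings below are illustrative assumptions for the sketch only; the actual study trained recurrent neural networks on electrocorticographic recordings.

```python
import numpy as np

# Toy sketch of a two-stage neural speech decoder. Shapes and the linear
# stages are illustrative assumptions, not the authors' implementation.
rng = np.random.default_rng(0)

T, N_ELECTRODES = 200, 256     # time steps, cortical electrode channels (assumed)
N_ARTIC, N_ACOUSTIC = 33, 32   # articulatory / acoustic feature dims (assumed)

# Simulated activity from speech-motor cortex electrodes.
ecog = rng.standard_normal((T, N_ELECTRODES))

# Stage 1: decode neural activity into articulatory (vocal-tract) movements.
W_artic = rng.standard_normal((N_ELECTRODES, N_ARTIC)) * 0.01
articulation = np.tanh(ecog @ W_artic)

# Stage 2: map articulatory trajectories to acoustic features.
W_acoustic = rng.standard_normal((N_ARTIC, N_ACOUSTIC)) * 0.1
acoustics = articulation @ W_acoustic

# A real system would pass the acoustic features to a vocoder to produce
# an audible waveform; here we just report the decoded shapes.
print(f"articulatory trajectories: {articulation.shape}")
print(f"acoustic features:         {acoustics.shape}")
```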
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.