Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on technology that uses eye gaze to generate synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, cortical electrodes recorded activity while subjects read hundreds of sentences aloud. The electrodes monitored regions of the cortex involved in speech production, and these signals were then decoded to generate intelligible synthesized speech.
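To make the decoding idea concrete, the sketch below shows a highly simplified stand-in: a ridge-regularized linear mapping from simulated "cortical" feature vectors to "acoustic" feature vectors. This is not the authors' method; their system used recurrent neural networks and an intermediate articulatory representation. All names, dimensions, and data here are invented purely for illustration.

```python
import numpy as np

# Synthetic stand-in data: 1,000 time frames of "cortical" features (e.g., 64
# electrode channels) and corresponding "acoustic" features (e.g., 32 spectral
# coefficients). All dimensions are assumptions made for this illustration.
rng = np.random.default_rng(0)
n_frames, n_channels, n_acoustic = 1000, 64, 32

true_mapping = rng.normal(size=(n_channels, n_acoustic))
neural = rng.normal(size=(n_frames, n_channels))             # simulated recordings
acoustic = neural @ true_mapping + 0.1 * rng.normal(size=(n_frames, n_acoustic))

# Fit a ridge-regularized linear decoder on a training split.
train, test = slice(0, 800), slice(800, None)
lam = 1.0
A = neural[train]
W = np.linalg.solve(A.T @ A + lam * np.eye(n_channels), A.T @ acoustic[train])

# Decode held-out frames and check how well the acoustic features are recovered.
pred = neural[test] @ W
corr = np.corrcoef(pred.ravel(), acoustic[test].ravel())[0, 1]
print(f"Held-out correlation between decoded and true acoustic features: {corr:.2f}")
```

In the actual study, the decoded acoustic features would drive a speech synthesizer rather than being compared numerically, but the core step, learning a mapping from recorded neural activity to speech parameters, is the same in spirit.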
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.