Researchers at Columbia University used human neural recordings and predictive models of neural activity to understand how perception of phonetic features is affected in glimpsed and masked speech in multi-talker environments. Glimpsed speech occurs during brief portions of a stimulus in which the target talker has a favorable signal-to-noise ratio. This contrasts with continuously masked speech, which has a persistently poorer signal-to-noise ratio.
The investigators recorded intracranial electroencephalography (iEEG) responses in seven participants who were undergoing surgical treatment for epilepsy while the participants listened to a two-talker stimulus presented at zero degrees azimuth. Results of the iEEG recordings and predictive models suggested that glimpsed speech was encoded in both the primary and secondary auditory cortex, with enhanced encoding of target speech in the secondary auditory cortex. Masked speech, however, showed a delayed response latency and was encoded only for the target talker, and only in the secondary auditory cortex.
In summary, the results of this study suggest separate brain mechanisms for encoding glimpsed and masked speech and further support existing models of glimpsed speech.
Reference
Raghavan VS, O’Sullivan J, Bickel S, Mehta AD, Mesgarani N. (2023) Distinct neural encoding of glimpsed and masked speech in multi-talker situations. PLoS Biol 21(6):e3002128.