Researchers at Columbia University used human neural recordings and predictive models of neural activity to understand how perception of phonetic features is affected in glimpsed and masked speech in multi-talker environments. Glimpsed speech refers to the brief portions of a stimulus in which the target talker has a favorable signal-to-noise ratio; continuously masked speech, in contrast, has a poorer signal-to-noise ratio throughout.
The investigators recorded intracranial electroencephalography (iEEG) responses from seven participants undergoing surgical treatment for epilepsy while they listened to a two-talker stimulus presented at zero degrees azimuth. The iEEG recordings and predictive models suggested that glimpsed speech was encoded in both primary and secondary auditory cortex, with enhanced encoding of the target speech in secondary auditory cortex. Masked speech, however, showed delayed response latency and was encoded only for the target talker, and only in secondary auditory cortex.
In summary, the results of this study suggest separate brain mechanisms for encoding glimpsed and masked speech and further support existing models of glimpsed speech.
Reference
Raghavan VS, O'Sullivan J, Bickel S, Mehta AD, Mesgarani N. (2023) Distinct neural encoding of glimpsed and masked speech in multi-talker situations. PLoS Biol 21(6):e3002128.