P21
Session 1 (Thursday 12 January 2023, 15:30-17:30)
Neural processing of amplitude modulations in normal-hearing adult listeners: Is there a link with the ability to identify consonants in noise?
Psychoacoustic research has highlighted the fundamental role of temporal amplitude modulations (AM) for speech perception in noisy environments. The present study investigates the relationship between the neural mechanisms underlying AM processing and speech perception in noise. Using electroencephalography (EEG), the AM following response (AMFR, or envelope following response), an auditory evoked potential that follows the modulation frequency of amplitude-modulated tones, can be recorded at the scalp. If AM processing relates to speech-in-noise abilities at early sensory stages, higher AMFR magnitudes should be associated with lower (better) speech-in-noise thresholds. Moreover, this relationship was expected to differ across AM rates, as speech information is mainly conveyed by slow AM cues.
Thirty-five young adults with normal hearing (18-30 years) completed two experiments: 1) an EEG session measuring AMFR at two AM rates (8 vs 40 Hz), and 2) a behavioural measure estimating consonant-identification thresholds in noise in four phonetic conditions. For the EEG experiment, the stimulus was a 4-min pure-tone carrier at 1027 Hz, sinusoidally amplitude-modulated at either 8 or 40 Hz (modulation depth m = 100%), presented twice. Participants faced a screen displaying a silent cartoon while the sounds were played at approximately 65 dB SPL through two loudspeakers, one on each side of the screen. A fast Fourier transform (FFT) was performed on the averaged EEG waveforms in each AM-rate condition. For each participant, the peak magnitude at 8 and 40 Hz was estimated and corrected for the EEG noise estimated in the bins neighbouring the modulation frequency.
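As an illustration, the AM stimulus and the noise-corrected spectral-magnitude estimate described above can be sketched as follows. This is a minimal NumPy sketch on simulated data; the sampling rate, signal duration, and number of neighbouring noise bins are illustrative choices, not parameters reported in the study:

```python
import numpy as np

fs = 8000                        # sampling rate (Hz); illustrative choice
fc, fm, m = 1027.0, 8.0, 1.0     # carrier (Hz), AM rate (Hz), modulation depth (m = 100%)
dur = 4.0                        # seconds here; the study used 4-min stimuli

t = np.arange(int(fs * dur)) / fs
# Sinusoidally amplitude-modulated pure tone
stim = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def amfr_magnitude(eeg, fs, fm, n_noise_bins=4):
    """Peak FFT magnitude at the modulation frequency, corrected by
    the mean magnitude of the neighbouring (noise) bins."""
    spec = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    k = int(np.argmin(np.abs(freqs - fm)))   # FFT bin closest to fm
    neighbours = np.r_[k - n_noise_bins:k, k + 1:k + 1 + n_noise_bins]
    return spec[k] - spec[neighbours].mean() # noise-corrected magnitude

# Toy "EEG": an 8 Hz following response buried in broadband noise
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * fm * t) + rng.normal(0.0, 1.0, t.size)
mag = amfr_magnitude(eeg, fs, fm)
```

With 4 s of data at 8000 Hz the FFT bin spacing is 0.25 Hz, so the 8 Hz component falls exactly on a bin; in practice the bin width and noise-bin count depend on epoch length and averaging.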
For the speech-in-noise adaptive task, syllables of the form /aCa/ were presented, where the consonant /C/ was either a fricative or a stop. Four phonetic conditions were designed, each presenting a minimal contrast in a) place of articulation for fricatives, b) place of articulation for stops, c) manner of articulation for voiced consonants, and d) manner of articulation for unvoiced consonants. Syllables were presented in an XAB task to assess consonant-identification thresholds in steady speech-shaped noise.
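The abstract does not specify the adaptive rule used to track thresholds, but the general logic of an adaptive SNR track can be sketched with a standard 1-up/2-down staircase (a common choice that converges near the 70.7%-correct point; this rule, the step size, and the simulated listener below are assumptions for illustration, not the study's procedure):

```python
import random

def xab_staircase(p_correct_at, start_snr=0.0, step=2.0, n_trials=200):
    """Minimal 1-up/2-down adaptive track on SNR (dB) for an XAB task.
    `p_correct_at(snr)` simulates the listener's probability of a
    correct response at a given SNR (chance = 50% in XAB)."""
    snr, streak, last_dir, reversals = start_snr, 0, 0, []
    for _ in range(n_trials):
        correct = random.random() < p_correct_at(snr)
        if correct:
            streak += 1
            if streak == 2:                 # two correct in a row: harder
                streak = 0
                if last_dir == +1:
                    reversals.append(snr)   # direction changed: a reversal
                snr -= step
                last_dir = -1
        else:                               # one error: easier
            streak = 0
            if last_dir == -1:
                reversals.append(snr)
            snr += step
            last_dir = +1
    # Threshold estimate: mean SNR over the last reversals
    tail = reversals[-6:]
    return sum(tail) / max(len(tail), 1)
```

A simulated listener whose performance improves sharply around some SNR will drive the track toward that region, yielding a threshold in dB SNR comparable in kind to those reported below.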
AMFRs were observed at each modulation rate. Consonant-identification thresholds in noise ranged between -19 and -11 dB SNR when averaged across phonetic conditions. Preliminary correlation analyses between individual AMFR magnitudes (at 8 and 40 Hz) and condition-averaged consonant-in-noise thresholds showed no significant relationship. The role of other individual factors, such as hearing levels, remains to be explored.
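The individual-differences analysis amounts to correlating one AMFR magnitude per participant with one condition-averaged threshold per participant. A minimal sketch with simulated data (the sample size matches the study's n = 35, but the values and distributions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 35                                      # participants, as in the study
amfr_8hz = rng.normal(1.0, 0.3, n)          # simulated AMFR magnitudes (a.u.)
thresholds = rng.normal(-15.0, 2.0, n)      # simulated SNR thresholds (dB)

# Pearson correlation across participants; with independent simulated
# variables, r should hover near zero, as in a null result
r = np.corrcoef(amfr_8hz, thresholds)[0, 1]
```

In the hypothesized direction, larger AMFR magnitudes would pair with lower (more negative) thresholds, i.e. a negative r.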