14th Speech in Noise Workshop, 12-13 January 2023, Split, Croatia

P26, Session 2 (Friday 13 January 2023, 09:00-11:00)
Perceptual learning of modulation-filtered speech

James Webb, Ediz Sohoglu
University of Sussex, Brighton, UK

Listeners have a remarkable ability to adapt to degraded speech, a process known as perceptual learning. Previous work has demonstrated that learning generalises beyond the words heard during training (Hervais-Adelman et al., 2008, doi:10.1037/0096-1523.34.2.460). This suggests that learning changes how listeners interpret acoustic features that are shared across words. However, precisely which acoustic representations are modified remains unknown.

Accumulating evidence suggests that the auditory cortex is highly tuned to spectral and temporal modulations in speech (Chi et al., 2005, doi:10.1121/1.1945807; Elliott and Theunissen, 2009, doi:10.1371/journal.pcbi.1000302). In the present study, we capitalised on this finding to delineate the perceptual representations that are changed by learning. Across two experiments (conducted online; N=150), listeners were trained and tested with speech filtered to contain non-overlapping modulations. While listeners’ comprehension accuracy improved two-fold for trained speech, learning failed to generalise to speech differing in modulation content. Such specificity of learning is consistent with the hypothesis that perceptual learning of degraded speech occurs at a level of processing in which representations are acoustic-based (Hervais-Adelman et al., 2011, doi:10.1037/a0020772). It additionally suggests that these acoustic representations are primarily organised in terms of spectral and temporal modulations.
