14th Speech in Noise Workshop, 12-13 January 2023, Split, Croatia

P14, Session 2 (Friday 13 January 2023, 09:00-11:00)
Principal Components Analysis of amplitude envelopes from spectral channels: comparison between music and speech

Agnieszka Duniec, Olivier Crouzet, Elisabeth Delais-Roussarie
Laboratoire de Linguistique de Nantes – LLING / UMR6310 CNRS, Nantes Université, France

Introduction: The efficient coding hypothesis predicts that perceptual systems are optimally adapted to the statistics of natural signals. On this view, sensory systems evolved to encode environmental signals so as to represent the greatest amount of information at the lowest possible resource cost. Previous studies applied Factor Analysis (FA) to amplitude modulation channels extracted from natural speech signals. While some authors argued that 4 channels would be sufficient to represent the main contrastive segmental information in natural clean speech, comparing speech statistics with perceptual performance led to the suggestion that 6 to 7 frequency bands would be required to represent vocoded speech optimally. However, research on music perception in cochlear-implant listeners sheds light on potential limits of this hypothesis: performance on vocoded material, in normal-hearing listeners as well as in cochlear-implant users, is systematically better for speech signals than for music. It is therefore crucial to compare the statistical properties of music and speech in order to better understand the relation between the characteristics of various auditory communication signals and their possible optimal coding in auditory perception. We applied the same FA method to two datasets: (1) a database of freely available music recordings (Free Music Archive, https://github.com/mdeff/fma) and (2) a free corpus of speech signals (Clarity Speech, https://doi.org/10.17866/rd.salford.16918180).

Method: Analyses were carried out in the Matlab environment and mirrored previous studies. Sample signals were passed through a gammatone filterbank (1/4-ERB bandwidth, approx. 100-120 channels) and the energy envelope of each channel was extracted. The resulting amplitude modulation matrix was then submitted to FA, and the Principal Components (PCs) were rotated independently. Channels whose amplitude envelopes covary should be grouped into a single PC. As our aim was to compare speech and music, whose typical signal bandwidths differ, two upper frequency limits were compared (8000 Hz vs. 22000 Hz).
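The pipeline described above (gammatone filterbank, per-channel envelope extraction, dimensionality reduction on the envelope matrix) can be sketched in Python roughly as follows. This is a minimal illustration, not the authors' Matlab implementation: the sampling rate, the 20-channel filterbank (rather than the study's 100-120 quarter-ERB channels), the white-noise input, the Hilbert envelope as the envelope estimate, and plain SVD-based PCA without the factor-rotation step are all assumptions made for the sketch.

```python
import numpy as np
from scipy.signal import gammatone, lfilter, hilbert

fs = 22050
rng = np.random.default_rng(0)
sig = rng.standard_normal(fs)  # 1 s of white noise as a stand-in signal


def hz_to_erb(f):
    # ERB-rate scale (Glasberg & Moore, 1990)
    return 21.4 * np.log10(4.37e-3 * f + 1.0)


def erb_to_hz(e):
    return (10.0 ** (e / 21.4) - 1.0) / 4.37e-3


# ERB-rate-spaced centre frequencies: a coarse 20-channel stand-in
# for the much denser filterbank used in the study.
n_channels = 20
cfs = erb_to_hz(np.linspace(hz_to_erb(80.0), hz_to_erb(8000.0), n_channels))

# Band-pass each channel with a gammatone filter, then take the
# Hilbert envelope of the filtered channel signal.
envelopes = np.empty((len(sig), n_channels))
for k, cf in enumerate(cfs):
    b, a = gammatone(cf, 'iir', fs=fs)
    envelopes[:, k] = np.abs(hilbert(lfilter(b, a, sig)))

# Principal components of the time-by-channel envelope matrix via SVD
# of the centred data: channels whose envelopes covary load on the
# same component.
X = envelopes - envelopes.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print(explained[:4].sum())  # proportion of variance in the first 4 PCs
```

Cumulative sums of `explained` give the explained-variance figures of the kind reported in the Results section; the FA-with-rotation step of the actual study would redistribute loadings across components but leave the total explained variance unchanged.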

Results: Contrastive analyses of the music and speech data are still in progress. Preliminary results show that 22 PCs (the maximum number of PCs described in previous studies) account for 86% of the variance in the music data. Focusing on the reduced numbers of PCs that previous work identified as 'optimal' for speech (4 to 7), we find that the cumulative explained variance for music lies between 35% and 50%. The statistical details reported in the relevant papers are not complete enough to allow a direct comparison of our music results with the earlier speech results; analysing the Clarity Speech database will provide the basis for an effective comparison. A full breakdown of the same measurements for each database will be detailed in the final presentation.

Acknowledgements: Agnieszka Duniec receives PhD funding (2019-2023) from the RFI-Ouest Industries Créatives (RFI-OIC, Région Pays de la Loire) & Nantes Université.
