14th Speech in Noise Workshop, 12-13 January 2023, Split, Croatia

P11, Session 1 (Thursday 12 January 2023, 15:30-17:30)
Speech-on-speech perception in cochlear implant users

Eleanor E. Harding
Department of Otorhinolaryngology, University Medical Center Groningen, University of Groningen, Groningen, NL
Prince Claus Conservatory, Hanze University of Applied Sciences, Groningen, NL

Etienne Gaudrain, Barbara Tillmann
Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, Université Lyon 1, Université de Saint-Etienne, Lyon, FR

Bert Maat
Department of Otorhinolaryngology, University Medical Center Groningen, University of Groningen, Groningen, NL

Robert Harris
Prince Claus Conservatory, Hanze University of Applied Sciences, Groningen, NL

Rolien H. Free, Deniz Başkent
Department of Otorhinolaryngology, University Medical Center Groningen, University of Groningen, Groningen, NL

A cochlear implant (CI) provides electric hearing to deaf individuals. While this technology restores perception of speech in quiet conditions, speech often remains hard for CI users to understand in environments where multiple speakers are talking in the background. One contributing factor to reduced speech-on-speech perception in CI users is that the fine structure of speakers’ voice characteristics, such as vocal tract length (VTL) or fundamental frequency (F0), is lost due to the reduced spectrotemporal resolution of the signal transmitted by the implant. Because voice characteristics cannot be used to distinguish simultaneous speakers, the intensity of the target speaker may need to be much greater than that of the masker for the content to be perceived. The current study is collecting speech-on-speech perception data from 24 CI users enrolled in a larger study. Our novel paradigm uses an adaptation of the coordinate response measure (CRM), in which a target speaker says a call number and a color while a gibberish masker is presented simultaneously, and the participant must identify the correct number and color on a response grid. The voice of the masker was modified such that it differed from the target voice in F0 and VTL according to three conditions [(∆F0, ∆VTL): (0, 0); (-6, +1.8); and (-12, +3.6); differences expressed in semitones]. In addition, the target-to-masker (level) ratio (TMR) was varied across three conditions (0 dB, +6 dB, +12 dB). This orthogonal design allows us to estimate the voice-difference benefit experienced by CI listeners at various TMRs. The results will provide baseline performance data on whether CI users can perceive target speech-on-speech at lower TMRs when voice differences are introduced.
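As a minimal illustration of the orthogonal design described above, the sketch below enumerates the nine conditions (three voice-difference conditions crossed with three TMRs) and converts the semitone differences to frequency scaling factors. The variable names and dictionary structure are illustrative assumptions, not part of the study's actual test software.

```python
from itertools import product

# Voice-difference conditions (dF0, dVTL) in semitones, and
# target-to-masker ratios in dB, as stated in the abstract.
VOICE_CONDITIONS = [(0.0, 0.0), (-6.0, 1.8), (-12.0, 3.6)]
TMRS_DB = [0, 6, 12]

def semitones_to_ratio(st):
    """Convert a difference in semitones to a frequency scaling factor."""
    return 2.0 ** (st / 12.0)

# Full orthogonal design: every voice condition crossed with every TMR,
# giving 3 x 3 = 9 experimental conditions.
design = [
    {
        "delta_f0_st": df0,
        "delta_vtl_st": dvtl,
        "tmr_db": tmr,
        "f0_scale": semitones_to_ratio(df0),  # e.g. -12 st halves F0
    }
    for (df0, dvtl), tmr in product(VOICE_CONDITIONS, TMRS_DB)
]
```

For example, the largest voice difference (-12 semitones in F0) corresponds to halving the masker's fundamental frequency relative to the target.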

Last modified 2023-01-06 23:41:06