P31, Session 1 (Thursday 12 January 2023, 15:30-17:30)
Machine learning framework predicts audio settings in real-world environments
Patient-centric adaptation of audiological preferences across contexts is a challenging task, as traditional clinical measurements of audibility reflect neither the cognitive perception of speech nor the binaural loudness of sounds in different listening situations. Listening programs enable hearing aid (HA) users to adapt device settings to specific listening situations, increasing the personalization of their listening experience.
This study investigates whether the selection of a specific listening program can be predicted from a user's sound exposure. To this end, we define a two-step time-series classification framework: first, a binary classifier predicts whether a program change is applied; second, a multinomial classifier predicts which of the four available programs ("Natural", "Detail", "Clarity", "Full") is selected, based on real-world, time-series sound environment data.
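As a minimal sketch of this two-step design, the snippet below chains a binary change detector with a multinomial program classifier. The classifier choices, feature shapes, and class names are illustrative assumptions, not the models or configuration used in the study.

```python
# Sketch of a two-step program-selection framework (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

PROGRAMS = ["Natural", "Detail", "Clarity", "Full"]

class TwoStepProgramClassifier:
    def __init__(self):
        # Step 1: did the user change program in this window? (binary)
        self.change_clf = LogisticRegression(max_iter=1000)
        # Step 2: which program was selected? (multinomial)
        self.program_clf = LogisticRegression(max_iter=1000)

    def fit(self, X, changed, program):
        # X: (n_windows, n_features) features extracted from sound-environment windows
        # changed: 1 if a program change occurred in the window, else 0
        # program: selected program label, only meaningful where changed == 1
        self.change_clf.fit(X, changed)
        mask = changed == 1
        self.program_clf.fit(X[mask], program[mask])
        return self

    def predict(self, X):
        changed = self.change_clf.predict(X)
        programs = np.array(["no change"] * len(X), dtype=object)
        if changed.any():
            programs[changed == 1] = self.program_clf.predict(X[changed == 1])
        return changed, programs
```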
The final dataset comprises nine sound environment features and approximately 3,500 program selections from 28 distinct users. A state-of-the-art time-series feature extraction model, MiniRocket, is used to transform the environment features for the classification task. Initial results show that the best-performing classifier achieves per-class F-scores of 85% for "Detail", 89% for "Full", 88% for "Natural", and 85% for "Clarity" selections.
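For context, the snippet below shows how a MiniRocket-style transform (here, sktime's MiniRocketMultivariate) can map multivariate sound-environment windows to features for a linear classifier, evaluated with per-class F-scores. The library choice, synthetic data, and ridge classifier are illustrative assumptions rather than the study's actual pipeline.

```python
# Illustrative MiniRocket feature extraction + linear classification (not the study's pipeline).
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sklearn.metrics import f1_score
from sktime.transformations.panel.rocket import MiniRocketMultivariate

# Synthetic stand-in: windows with 9 sound-environment channels
n_windows, n_channels, n_timepoints = 200, 9, 300
X = np.random.randn(n_windows, n_channels, n_timepoints)
y = np.random.choice(["Natural", "Detail", "Clarity", "Full"], size=n_windows)

# MiniRocket maps each multivariate window to a large convolutional feature vector
rocket = MiniRocketMultivariate()
X_feat = rocket.fit_transform(X)

# Linear classifier on the transformed features, following the ROCKET recipe
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)).fit(X_feat, y)
pred = clf.predict(X_feat)
print(f1_score(y, pred, average=None, labels=["Natural", "Detail", "Clarity", "Full"]))
```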
By rethinking contextual adaptation of HA settings as a time-series classification problem, we validate the role of the sound environment in program selection. Additionally, we establish a baseline for investigating the role of listening intents, as well as the application of privacy-aware machine learning techniques to protect user data.