ePoster

Decoding of selective attention to speech in CI patients using linear and non-linear methods

Constantin Jehn, Adrian Kossmann, Anja Hahne, Niki Vavatzanidis, Tobias Reichenbach
FENS Forum 2024 (2024)
Messe Wien Exhibition & Congress Center, Vienna, Austria

Abstract

Aims: Recent research has shown that selective attention to speech can be decoded from non-invasive EEG recordings [1]. This effect could potentially be exploited in neuro-steered CIs that aid the wearer in noisy environments. Here we examine linear and non-linear methods for decoding selective attention, specifically in bimodal CI users, in a competing-speaker scenario.

Methods: EEG data were collected from 23 bimodal CI patients exposed to two competing speech streams emanating from spatially separated loudspeakers. Patients were instructed to attend to one speech stream during a given segment. Selective attention was decoded through a regularized linear backward model and a convolutional neural network (CNN). We further refined the decoding by employing a support-vector machine (SVM) to classify segments based on their correlation scores.

Results: The focus of attention could be decoded successfully with the linear backward model. Moreover, the CNN achieved higher decoding accuracies of up to 71%, mirroring outcomes observed in studies with normal-hearing participants [2].

Conclusions: Our results demonstrate the advantage of non-linear methods for decoding selective attention in bimodal cochlear implant users. However, challenges such as moderate mean decoding accuracies and substantial variability among participants remain to be addressed.

References:
[1] O'Sullivan, J. A., et al. (2015) Attentional selection in a cocktail party environment can be decoded from single-trial EEG, Cereb. Cortex 25: 1697-1706.
[2] Thornton, M., Mandic, D., Reichenbach, T. (2022) Robust decoding of the speech envelope from EEG recordings through deep neural networks, J. Neural Eng. 19: 046007.
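To make the decoding pipeline concrete, the following is a minimal sketch in Python (NumPy and scikit-learn) of a regularized linear backward model with correlation-based attention decoding, plus an SVM operating on per-segment correlation scores. It is a sketch under stated assumptions, not the authors' actual pipeline: the sampling rate, lag window, channel count, regularization strength, and all placeholder data are illustrative choices, and the CNN decoder from the abstract is not shown.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.svm import SVC

    def lag_matrix(eeg, n_lags):
        # Stack time-lagged copies of the EEG (n_samples x n_channels)
        # into a design matrix of shape (n_samples, n_channels * n_lags).
        n_samples, n_channels = eeg.shape
        X = np.zeros((n_samples, n_channels * n_lags))
        for lag in range(n_lags):
            X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
        return X

    # Placeholder data: EEG and the two speech envelopes at a common rate.
    fs = 64                                      # assumed sampling rate (Hz)
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((fs * 60, 31))     # 60 s of 31-channel EEG
    env_attended = rng.standard_normal(fs * 60)  # envelope, attended speaker
    env_ignored = rng.standard_normal(fs * 60)   # envelope, ignored speaker

    # Backward model: ridge regression from time-lagged EEG to the attended
    # envelope; lag window and regularization strength are assumptions.
    X = lag_matrix(eeg, n_lags=int(0.25 * fs))   # lags up to ~250 ms
    model = Ridge(alpha=1e3).fit(X, env_attended)

    # Decode attention by correlating the reconstructed envelope with both
    # speakers (training data reused here for brevity; use held-out segments).
    recon = model.predict(X)
    r_att = np.corrcoef(recon, env_attended)[0, 1]
    r_ign = np.corrcoef(recon, env_ignored)[0, 1]
    decoded_attended = r_att > r_ign

    # Refinement described in the abstract: classify segments from their
    # correlation scores with an SVM instead of a hard comparison.
    corr_features = rng.standard_normal((40, 2))  # (n_segments, [r_att, r_ign])
    segment_labels = (corr_features[:, 0] > corr_features[:, 1]).astype(int)
    svm = SVC(kernel="linear").fit(corr_features, segment_labels)

The hard comparison and the SVM step consume the same per-segment correlation features; the SVM simply learns a decision boundary over them rather than assuming the attended stream always yields the higher correlation.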

Unique ID: fens-24/decoding-selective-attention-speech-62f56cc2