Decoding Newly Learned Sound Categories from Neuroimaging Signals: Towards an Optimal Pipeline

Poster No:

1668 

Submission Type:

Abstract Submission 

Authors:

Naishi Feng1, Gangyi Feng1

Institutions:

1The Chinese University of Hong Kong, Hong Kong SAR, China

First Author:

Naishi Feng  
The Chinese University of Hong Kong
Hong Kong SAR, China

Co-Author:

Gangyi Feng  
The Chinese University of Hong Kong
Hong Kong SAR, China

Introduction:

Machine learning (ML) techniques have been successfully applied to neuroimaging data to decode a diverse range of mental information. Most decoded stimuli are well-learned and familiar, such as common object categories[1], native language[2], and speech[3]. However, decoding unfamiliar or newly learned knowledge from noisy neuroimaging signals is often challenging, and it becomes even more so when decoding novel auditory categories after only brief training and exposure to the exemplars. The temporally fleeting and multidimensional properties of acoustic signals also make it difficult for machine learning algorithms to learn the underlying complex patterns. A new signal processing and machine learning pipeline is therefore required to overcome these challenges and increase neural decodability, which could also provide insights into how learning shapes neural responses. Here, we used magnetoencephalography (MEG) to record neural signals while participants learned to categorize novel artificial sounds. We applied various ML algorithms to decode the newly learned sound categories and to determine an optimal algorithm and analysis pipeline.

Methods:

Thirty-five healthy right-handed participants were recruited for the auditory category learning study. Participants listened to the sounds and learned to categorize them into two categories based on feedback in a 40-minute session. MEG signals were recorded with an Elekta Neuromag system. We selected three types of machine learning algorithms to test their neural decoding performance: linear discriminant analysis (LDA), support vector machines (SVM), and artificial neural networks (ANN). For SVM, we used both linear and nonlinear variants. The ANN category included a three-layer fully connected neural network (FNN) and a shallow ConvNet (SC)[4]. These algorithms were chosen for their ability to extract useful information from neural signals and to learn complex patterns effectively from relatively small training samples, although ANN algorithms may require more training data to reveal their full decoding power. The FNN has two layers that can learn and store mapping relations between neural data and targets, whereas the SC has two convolutional layers capable of capturing abstract information from the raw data. In addition to overall decoding performance, we evaluated the algorithms separately for successful (top 15) and less successful (bottom 15) learners, across training blocks, and across brain locations and time windows of sound presentation.
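The classifier comparison described above can be sketched as follows. This is a minimal illustration only, not the authors' actual pipeline: synthetic random data stands in for the real MEG epochs (trials × channels × time points), a weak class-dependent pattern is injected by hand, and the trial counts, channel counts, and cross-validation scheme are all assumptions.

```python
# Hypothetical sketch of comparing LDA and linear/nonlinear SVM on
# trial-wise neural data. Synthetic data replaces real MEG epochs;
# all dimensions and parameters here are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 30, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)          # two sound categories
X[y == 1, :5, 20:30] += 0.5               # inject a weak class-dependent pattern

X_flat = X.reshape(n_trials, -1)          # flatten channels x time into features

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "linear-SVM": SVC(kernel="linear"),
    "rbf-SVM": SVC(kernel="rbf"),
}
scores = {}
for name, clf in classifiers.items():
    # standardize features, then 5-fold cross-validated accuracy
    pipe = make_pipeline(StandardScaler(), clf)
    scores[name] = cross_val_score(pipe, X_flat, y, cv=5).mean()
    print(f"{name}: {scores[name]:.2f}")
```

In practice, the flattening step and fold structure matter: cross-validation must split by trial so that no time points from a test trial leak into training.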

Results:

Among all the algorithms tested, the linear-SVM combined with PCA dimension reduction and the SC were more effective in decoding newly learned categories than the others when using all available data. Neural decoding accuracy gradually increased across training blocks for all methods, mirroring the pattern of behavioral learning. The linear classifiers (LDA and linear-SVM) achieved better performance than the other algorithms when applied to subgroups of learners (i.e., successful and less successful learners). We further showed that decoding of sound categories across time and location peaked in the 200-300 ms and 400-500 ms windows after sound presentation and was localized to bilateral temporal and parietal channels.
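A time-resolved version of the PCA + linear-SVM analysis can be sketched as below. Again, this is an illustrative assumption rather than the authors' code: synthetic data simulates a category effect in one time window, the sampling rate (100 Hz), window length, and PCA dimensionality are invented for the example.

```python
# Hypothetical sketch of time-window decoding with PCA + linear SVM.
# Synthetic data simulates a category effect around 200-300 ms;
# sampling rate, window size, and PCA components are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 120, 30, 60   # e.g., 600 ms at an assumed 100 Hz
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)
X[y == 1, :, 20:30] += 0.4    # simulated category effect at samples 20-30

window = 10                   # 100-ms non-overlapping windows
accs = []
for start in range(0, n_times, window):
    # features for this window: channels x window time points, flattened
    Xw = X[:, :, start:start + window].reshape(n_trials, -1)
    pipe = make_pipeline(StandardScaler(),
                         PCA(n_components=20),
                         SVC(kernel="linear"))
    accs.append(cross_val_score(pipe, Xw, y, cv=5).mean())

best_ms = int(np.argmax(accs)) * 100   # window onset in ms (assumed 100 Hz)
print(f"best window starts at {best_ms} ms, accuracy {max(accs):.2f}")
```

Running the same loop over channel groups instead of time windows would give the spatial profile; combining both yields the time-by-location map reported above.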
Supporting Image: Figure1.jpg
Supporting Image: Figure2.jpg
 

Conclusions:

Although decoding newly learned knowledge is challenging, we demonstrate that the linear-SVM with a dimension reduction procedure and the SC algorithm are promising technical solutions for uncovering hidden neural signals related to knowledge acquired through brief learning and exposure. This pipeline can capture the core temporal and spatial neural information for decoding and provide novel insights into the neural mechanisms of auditory learning.

Modeling and Analysis Methods:

Classification and Predictive Modeling 2
EEG/MEG Modeling and Analysis 1

Novel Imaging Acquisition Methods:

MEG

Keywords:

MEG
Other - Neural Decoding; Category Learning; Machine Learning; Short-term Training

1|2 Indicates the priority used for review

References:

[1] Liu, C. (2022), ‘SincNet-Based Hybrid Neural Network for Motor Imagery EEG Decoding’, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 30, pp. 540–549. Available at: https://doi.org/10.1109/TNSRE.2022.3156076.
[2] Lin, Y. (2022), ‘Neural decoding of speech with semantic-based classification’, Cortex, vol. 154, pp. 231–240. Available at: https://doi.org/10.1016/j.cortex.2022.05.018.
[3] Cooney, C. (2020), 'Evaluation of hyperparameter optimization in machine and deep learning methods for decoding imagined speech EEG', Sensors (Switzerland), vol. 20, no. 16, pp. 1–22. Available at: https://doi.org/10.3390/s20164629.
[4] Schirrmeister, R.T. (2017), 'Deep learning with convolutional neural networks for EEG decoding and visualization', Human Brain Mapping, vol. 38, no. 11, pp. 5391–5420. Available at: https://doi.org/10.1002/hbm.23730.