The Neural Representation of Sound Locations by Functional Connectivity

Poster No:

2487 

Submission Type:

Abstract Submission 

Authors:

Liwei Sun1, Renjie Tong1, Ying Liang1, Jing Wei1, Chunlin Li1, Xu Zhang1

Institutions:

1School of Biomedical Engineering, Capital Medical University, Beijing, China

First Author:

Liwei Sun  
School of Biomedical Engineering, Capital Medical University
Beijing, China

Co-Author(s):

Renjie Tong  
School of Biomedical Engineering, Capital Medical University
Beijing, China
Ying Liang  
School of Biomedical Engineering, Capital Medical University
Beijing, China
Jing Wei  
School of Biomedical Engineering, Capital Medical University
Beijing, China
Chunlin Li  
School of Biomedical Engineering, Capital Medical University
Beijing, China
Xu Zhang  
School of Biomedical Engineering, Capital Medical University
Beijing, China

Introduction:

The ability to localize sound sources rapidly allows human beings to understand the surrounding environment efficiently. Previous studies suggest that a cortical dorsal "where" pathway, comprising the primary auditory cortex (PAC), planum temporale (PT), parietal regions, and prefrontal areas, is functionally specialized for sound localization. However, it remains unclear how the regions within the "where" pathway interact with one another during sound localization. In the current study, we investigated how multivariate functional connectivity (FC) patterns represent sound locations by collecting fMRI data from healthy participants during passive listening to sounds from different horizontal locations. Our results reveal a functional-connectivity-based system that encodes the lateralization angles of sound locations.

Methods:

fMRI data were collected from twenty-seven healthy participants (14 females; 20–34 years old) during passive listening. Five 1.2-s white-noise clips from different horizontal directions (-90°, -45°, 0°, 45°, 90°) at a distance of 1.2 m were recorded in an isolation booth using a dummy head with a microphone in each ear. A sparse-sampling fMRI design was adopted to avoid interference from scanner noise: each trial began with a 2-s BOLD acquisition, followed by a 2-s silent gap during which the sound was replayed. Sounds were presented in blocks, each consisting of five trials of the clip from the same location. fMRI data were preprocessed using fMRIPrep and normalized into MNI152 space. Regions of interest (ROIs) were defined in our previous study (Sun et al., 2023) and projected onto the BN246 atlas to obtain a more granular parcellation of the "where" pathway (20 ROIs). FC patterns among the 20 ROIs were computed for each participant. A leave-one-subject-out cross-validation procedure was used to estimate the mean classification accuracy of sound locations; within each fold, we used an F-test for feature selection and multinomial logistic regression for classification, varying the number of selected FC features to find the best-performing model. In addition, we compared the actual confusion matrix with confusion matrices predicted by three representational models: a full model, a left-vs.-right model, and a lateralization-angle model. Finally, we performed an exploratory analysis of FC patterns based on the whole-brain network.
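The following is a minimal sketch of the decoding pipeline in Python with scikit-learn. It is our illustration, not the authors' code: the synthetic data, variable names, and the 5-to-50 feature sweep are assumptions; in the real analysis X would hold the per-participant FC patterns and y the location labels.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data: 27 subjects x 5 locations, 190 FC edges
# (20 ROIs -> 20*19/2 unique connections).
rng = np.random.default_rng(0)
n_sub, n_loc, n_fc = 27, 5, 190
X = rng.standard_normal((n_sub * n_loc, n_fc))
y = np.tile(np.arange(n_loc), n_sub)          # location label per sample
groups = np.repeat(np.arange(n_sub), n_loc)   # subject ID per sample

def loso_accuracy(X, y, groups, k):
    """Leave-one-subject-out accuracy with F-test feature selection."""
    accs = []
    for train, test in LeaveOneGroupOut().split(X, y, groups):
        # Feature selection is fit inside each training fold to avoid leakage.
        clf = make_pipeline(
            SelectKBest(f_classif, k=k),
            LogisticRegression(max_iter=1000),  # multinomial for >2 classes
        )
        clf.fit(X[train], y[train])
        accs.append(clf.score(X[test], y[test]))
    return float(np.mean(accs))

# Sweep the number of retained FC features to find the best model.
scores = {k: loso_accuracy(X, y, groups, k) for k in range(5, 55, 5)}
best_k = max(scores, key=scores.get)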

Results:

As more FC features were included in the model, the decoding accuracy of sound locations initially increased, peaked at 29.63% (#FC = 15), and then gradually decreased. The best model, with 15 FCs, was used for the following analyses. We calculated Kendall's tau correlations between the actual confusion matrix and the three representational models. The correlation between the observed confusion matrix and the lateralization-angle model was significantly higher than that with the left-vs.-right model (t = 4.564, p < 0.001) and marginally higher than that with the full model (t = 2.043, p = 0.051). Furthermore, sound locations could also be decoded from the whole-brain network, with a peak accuracy of 32.6% (#FC = 35). The correlation between the derived confusion matrix and the lateralization-angle model was significantly higher than those with the other two models (lateralization angle > full: t = 2.228, p = 0.035; lateralization angle > left vs. right: t = 4.601, p < 0.001).
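A minimal sketch of the model-comparison step follows. The exact construction of the three model confusion matrices is our assumption (the abstract reports only their names): each model predicts how confusable pairs of locations should be, per-participant Kendall's tau values between observed and predicted matrices are computed, and taus are compared across participants with paired t-tests. The synthetic confusion matrices are illustrative only.

import numpy as np
from scipy.stats import kendalltau, ttest_rel

angles = np.array([-90.0, -45.0, 0.0, 45.0, 90.0])
n = len(angles)

# Full model: every location is perfectly distinguishable.
full = np.eye(n)
# Left-vs.-right model: locations are confusable only within a hemifield.
side = np.sign(angles)
left_right = (side[:, None] == side[None, :]).astype(float)
# Lateralization-angle model: confusability falls off with angular distance.
lat_angle = 1.0 - np.abs(angles[:, None] - angles[None, :]) / 180.0

def model_fit(observed_cm, model_cm):
    """Kendall's tau between an observed and a model confusion matrix."""
    tau, _ = kendalltau(observed_cm.ravel(), model_cm.ravel())
    return tau

# Illustrative per-participant observed confusion matrices (rows = true
# location, columns = predicted location); real ones come from the decoder.
rng = np.random.default_rng(0)
cms = [np.abs(lat_angle + 0.1 * rng.standard_normal((n, n))) for _ in range(27)]
taus_lat = [model_fit(cm, lat_angle) for cm in cms]
taus_lr = [model_fit(cm, left_right) for cm in cms]
t, p = ttest_rel(taus_lat, taus_lr)  # paired comparison across participants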

Conclusions:

In the current study, we were able to decode sound locations from the FC patterns within the dorsal "where" pathway. Our results indicate that FC patterns encode sound locations by lateralization angle, which differs from the opponent hemifield coding model suggested by previous activation studies.

Modeling and Analysis Methods:

Connectivity (e.g., functional, effective, structural)
fMRI Connectivity and Network Modeling
Multivariate Approaches

Novel Imaging Acquisition Methods:

BOLD fMRI 2

Perception, Attention and Motor Behavior:

Perception: Auditory/Vestibular 1

Keywords:

FUNCTIONAL MRI
Hearing
Machine Learning
Multivariate
NORMAL HUMAN
Perception

1|2 Indicates the priority used for review

References:

Sun, L., Li, C., Wang, S., Si, Q., Lin, M., Wang, N., Sun, J., Li, H., Liang, Y., Wei, J., Zhang, X., & Zhang, J. (2023), 'Left frontal eye field encodes sound locations during passive listening', Cerebral Cortex, vol. 33, no. 6, pp. 3067–3079