Poster No:
2494
Submission Type:
Abstract Submission
Authors:
Minsun Park1, Ghootae Kim2, Chai-Youn Kim1
Institutions:
1Korea University, Seoul, Republic of Korea, 2Cognitive Science Research Group, Korea Brain Research Institute, Seoul, Republic of Korea
First Author:
Minsun Park
Korea University
Seoul, Republic of Korea
Co-Author(s):
Ghootae Kim
Cognitive Science Research Group, Korea Brain Research Institute
Seoul, Republic of Korea
Chai-Youn Kim
Korea University
Seoul, Republic of Korea
Introduction:
Recent neuroscientific findings challenge the traditional notion that unimodal sensory cortices (e.g., V1 for vision, A1 for audition) operate independently during multisensory integration. Studies of audio-visual (AV) interactions, in particular, have demonstrated cross-modal processing: auditory stimuli alone can activate the visual cortex (Poirier et al., 2006), and various kinds of auditory information can be decoded from neural patterns in the visual cortex (Rezk et al., 2020; Vetter et al., 2020). Although these findings point to AV interaction within the visual cortex, a more direct assessment is still needed, because sound and vision were not presented concurrently in those studies. Here, we investigated whether the integration of simultaneously presented AV signals modulates neural activity in early visual areas. Our previous psychophysical work (Park et al., 2019; under review) using the motion aftereffect (MAE; Mather, 1980) showed an AV direction congruence effect: the visual MAE was enhanced when the visual adapting motion was accompanied by directionally congruent, rather than incongruent, auditory motion. Using fMRI, we probed the neural basis of this congruence effect by testing whether AV direction congruence can be decoded from activity patterns in V1 and V2 using multi-voxel pattern analysis (MVPA).
Methods:
Nineteen naïve individuals (mean age 24.09 ± 3.38 years) participated in an fMRI experiment (Siemens, 3T). Across 16 runs of the main experiment, adaptation and MAE phases were each presented 8 times per run. During the adaptation phases (36 s initially, 12 s thereafter), visual adaptation was induced by 100%-coherence random-dot kinematograms (RDKs) moving leftward or rightward. Leftward or rightward auditory motion was simulated by modulating the intensity of white noise between the binaural channels of noise-canceling headphones. There were four sound conditions: congruent and incongruent conditions defined by AV direction congruence, along with stationary and no-sound conditions. In the subsequent 4-s MAE phase, participants reported the duration and direction of the MAE while viewing stationary RDKs. Using retinotopic mapping and visual direction localizers, we identified a subset of motion-sensitive voxels in V1 and V2 (Figure 1a). We then performed MVPA with a multi-class SVM to discriminate the four sound conditions, separately for the adaptation and MAE phases (a minimal sketch of this decoding step is given below). A whole-brain searchlight analysis was conducted to identify regions whose activity patterns distinguished the sound conditions.
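For illustration, the following is a minimal sketch of the multi-class decoding step described above, written in Python with scikit-learn. The synthetic voxel patterns, array shapes, condition coding, and leave-one-run-out cross-validation scheme shown here are assumptions made for the example, not the exact pipeline used in the study.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for ROI voxel patterns: 16 runs x 8 phases per run,
# 200 motion-sensitive V1/V2 voxels (all shapes are illustrative assumptions).
n_runs, phases_per_run, n_voxels = 16, 8, 200
X = rng.standard_normal((n_runs * phases_per_run, n_voxels))
y = np.tile([0, 1, 2, 3], n_runs * phases_per_run // 4)   # 4 sound conditions
runs = np.repeat(np.arange(n_runs), phases_per_run)       # run labels for CV folds

# Linear multi-class SVM (scikit-learn handles multi-class via one-vs-one),
# evaluated with leave-one-run-out cross-validation.
clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"Mean decoding accuracy: {scores.mean():.3f} (chance = 0.25)")

In the actual analysis, the random data above would be replaced by z-scored voxel patterns extracted from the V1/V2 ROIs for each adaptation or MAE phase, and classification accuracy would be compared against the 25% chance level across participants.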
Results:
Behavioral results replicated our previous finding of a longer MAE duration in the congruent condition than in the incongruent condition (Figure 1b). Univariate fMRI results mirrored this congruence effect, showing greater signal changes in V1 and V2 during adaptation in the congruent condition (Figure 1c), whereas no significant differences were observed during the MAE phase. Notably, AV direction congruence could be reliably classified from neural patterns in V1 and V2 during the adaptation phase and even in the absence of physical motion during the MAE phase (Figure 1d). Whole-brain searchlight analysis identified a large portion of the bilateral superior temporal region, including A1, the transverse temporal gyrus, and the insula, during both phases (Figure 2).
Conclusions:
Our results point to neural mechanisms underlying the AV congruence effect observed in MAE duration, demonstrating that neural patterns in the early visual cortex differ depending on AV integration. The searchlight findings further suggest that the congruence-related neural signatures in V1 and V2 may originate from feedback influences from higher-order multisensory areas such as the superior temporal sulcus (STS) and insula. The present work adds to our understanding of the neural mechanisms governing cross-modal interactions within retinotopic visual areas by offering a direct examination of AV integration.
Modeling and Analysis Methods:
Activation (e.g., BOLD task-fMRI)
Multivariate Approaches 2
Perception, Attention and Motor Behavior:
Perception: Auditory/ Vestibular
Perception: Multisensory and Crossmodal 1
Perception: Visual
Keywords:
Cognition
Cortex
FUNCTIONAL MRI
Machine Learning
Multivariate
Perception
Univariate
Vision
Other - Multisensory integration; Auditory
1|2 Indicates the priority used for review
References:
Mather, G. (1980). The movement aftereffect and a distribution-shift model for coding the direction of visual movement. Perception, 9, 379-392.
Park, M., Blake, R., & Kim, C. Y. (under review). Audio-visual interactions outside of visual awareness during motion adaptation.
Park, M., Blake, R., Kim, Y., & Kim, C. Y. (2019). Congruent audio-visual stimulation during adaptation modulates the subsequently experienced visual motion aftereffect. Scientific Reports, 9(1), 1-11.
Poirier, C., Collignon, O., Scheiber, C., Renier, L., Vanlierde, A., Tranduy, D., Veraart, C., & de Volder, A. G. (2006). Auditory motion perception activates visual motion areas in early blind subjects. Neuroimage, 31, 279–285.
Rezk, M., Cattoir, S., Battal, C., Occelli, V., Mattioni, S., & Collignon, O. (2020). Shared representation of visual and auditory motion directions in the human middle-temporal cortex. Current Biology, 30(12), 2289-2299.
Vetter, P., Bola, Ł., Reich, L., Bennett, M., Muckli, L., & Amedi, A. (2020). Decoding natural sounds in early “visual” cortex of congenitally blind individuals. Current Biology, 30(15), 3039-3044.