Poster No:
948
Submission Type:
Abstract Submission
Authors:
Thomas Botch1, Hayoung Song2, Tamara Vanderwal3, Monica Rosenberg2, Emily Finn1
Institutions:
1Dartmouth College, Hanover, NH, 2University of Chicago, Chicago, IL, 3Department of Psychiatry, University of British Columbia, Vancouver, BC
Introduction:
Everyday experience is composed of rich, multimodal sensory information. The brain leverages certain features of incoming information to make meaning amidst this complexity. One analytic approach for capturing brain responses to incoming information is intersubject correlation (ISC), in which brain activity to a time-locked stimulus is correlated across people to isolate stimulus-driven responses [1,2]. Yet, while ISC can tell us how much brain activity is driven by a stimulus, it cannot tell us which specific features drive this activity. Prior work has related ISC to known stimulus features [1,3,4], essentially treating a naturalistic stimulus as equivalent to the sum of its parts. Here, we investigate the "dark matter" of ISC, or the shared signal remaining after modeling known features.
Methods:
We used a published fMRI dataset [5] in which subjects (N=43) watched four audiovisual movies (range: 7:27-12:27 min). We also used a subset of the Narratives dataset [6] in which subjects (N=45) listened to four auditory stories (range: 6:40-13:57 min).
First, we calculated "baseline" ISC using a voxel-wise, leave-one-subject-out approach. We identified voxels showing significant ISC using null distributions based on 1000 random time shifts.
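This step could be sketched as follows for a single voxel's subject-by-time matrix. The function names (`loo_isc`, `timeshift_null`) are illustrative, and the null assumes circular time shifts; the abstract specifies only "random time shifts", so treat the shift scheme as an assumption.

```python
import numpy as np

def loo_isc(data):
    """Leave-one-subject-out ISC for one voxel.

    data: array of shape (n_subjects, n_timepoints).
    Correlates each subject's timeseries with the mean of all
    remaining subjects, then averages the resulting r values.
    """
    n_subj = data.shape[0]
    rs = []
    for s in range(n_subj):
        left_out = data[s]
        others_mean = data[np.arange(n_subj) != s].mean(axis=0)
        rs.append(np.corrcoef(left_out, others_mean)[0, 1])
    return float(np.mean(rs))

def timeshift_null(data, n_perm=1000, seed=0):
    """Null ISC distribution from random circular time shifts.

    Each permutation independently shifts every subject's
    timeseries, destroying time-locked alignment while
    preserving autocorrelation.
    """
    rng = np.random.default_rng(seed)
    n_subj, n_tp = data.shape
    null = np.empty(n_perm)
    for p in range(n_perm):
        shifted = np.stack([np.roll(row, rng.integers(n_tp))
                            for row in data])
        null[p] = loo_isc(shifted)
    return null
```

A voxel's observed ISC would then be compared against its null distribution to assess significance.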
Then, across both datasets, we extracted 23 stimulus features (Fig. 1a) spanning auditory (e.g., loudness, speech), visual (e.g., luminance, faces), and language (e.g., concreteness) modalities. We also included the first-derivative of each feature to track both presence and changes in each feature. This resulted in 42 features for the audiovisual movies (auditory & visual features) and 18 features for the auditory stories (auditory & language features). We then used a general linear model (GLM) to model stimulus features in each subject's timeseries. Here, we also extracted the residual timeseries (the unmodeled signal) for each subject.
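The feature regression and residual extraction could be sketched as below. This is a minimal ordinary-least-squares version; the function name is hypothetical, and details such as HRF convolution of the feature regressors and nuisance terms are omitted here for brevity.

```python
import numpy as np

def feature_glm_residuals(bold, features):
    """Regress stimulus features (plus first derivatives) out of
    a voxel-by-time BOLD matrix; return betas and residuals.

    bold:     (n_voxels, n_timepoints)
    features: (n_features, n_timepoints)
    """
    # Augment the design with first derivatives so the model
    # tracks both the presence of and changes in each feature.
    deriv = np.gradient(features, axis=1)
    X = np.vstack([features, deriv]).T             # (T, 2F)
    X = np.column_stack([np.ones(X.shape[0]), X])  # add intercept
    # Solve the GLM for all voxels at once.
    betas, *_ = np.linalg.lstsq(X, bold.T, rcond=None)
    # Residuals = the unmodeled signal carried into residual ISC.
    residuals = bold.T - X @ betas
    return betas, residuals.T
```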
Using the outputs of the GLM, we evaluated group-level univariate responses to each feature using a one-sample t-test (q<0.001) and calculated ISC over the residual timeseries ("residual" ISC). Lastly, we contrasted baseline and residual ISC values to assess the extent to which the regression removed variance in brain activity attributable to the stimulus.
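One plausible formulation of this contrast, consistent with the "percent of ISC explained" figures reported below, is the proportional drop from baseline to residual ISC (the exact metric is not specified in the abstract, so this is an assumption):

```python
def percent_isc_explained(baseline_isc, residual_isc):
    """Percent of baseline ISC removed by the feature model,
    computed per voxel as the proportional drop in ISC."""
    return 100.0 * (baseline_isc - residual_isc) / baseline_isc
```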
Results:
As expected, sensory features (e.g., loudness, luminance) elicited consistent responses within their corresponding sensory cortices: luminance in primary visual cortex and loudness in primary auditory cortex (Fig. 1b/c; q<0.001). Interestingly, in the audiovisual movies, sensory features also drove responses in unrelated unimodal cortex (e.g., luminance within auditory cortex) and in association cortex. This suggests that the multimodal nature of the stimulus alters how and where individual features are represented.
To address our primary question, we compared stimulus-driven signal (as indexed by ISC) before and after modeling known features. At baseline (before regression), there was widespread ISC (p<0.05) within both datasets. ISC was highest in primary sensory regions; specifically, auditory/visual cortices for audiovisual movies (Fig. 2a) and auditory regions for Narratives (Fig. 2b). Although the modeled features captured some explainable variance in brain activity, causing ISC to decrease following regression, significant stimulus-driven signal remained across cortex. In fact, our model explained only a limited portion of the ISC signal – on average 10.1% (max 45.3%) in the Narratives dataset and 20% (max 61%) in the audiovisual movies – suggesting this shared signal may relate to unknown (or at least unmodeled) dimensions important for cognitive processing.
Conclusions:
Findings indicate that the brain represents naturalistic stimuli as more than the sum of individual features. Although some stimulus-driven signal was removed by modeling 23 known features, the majority of this signal persisted. We suggest that there are potentially unknown/emergent features driving neural responses to naturalistic stimuli.
Higher Cognitive Functions:
Higher Cognitive Functions Other 1
Modeling and Analysis Methods:
Activation (eg. BOLD task-fMRI) 2
Univariate Modeling
Perception, Attention and Motor Behavior:
Perception and Attention Other
Keywords:
Cognition
Computational Neuroscience
Cortex
Data analysis
FUNCTIONAL MRI
Meta-Analysis
Open Data
Univariate
Other - Naturalistic
1|2 Indicates the priority used for review
[1] Hasson, U. et al. (2004). Intersubject Synchronization of Cortical Activity During Natural Vision. Science, 303(5664), 1634–1640.
[2] Nastase, S. A., Gazzola, V., Hasson, U., & Keysers, C. (2019). Measuring shared responses across subjects using intersubject correlation. Social Cognitive and Affective Neuroscience, 14(6), 667–685.
[3] Pajula, J., Kauppi, J.-P., & Tohka, J. (2012). Inter-Subject Correlation in fMRI: Method Validation against Stimulus-Model Based Analysis. PLoS ONE, 8(8), e41196.
[4] Hasson, U., Malach, R., & Heeger, D. J. (2010). Reliability of cortical activity during natural stimulation. Trends in Cognitive Sciences, 14(1), 40–48.
[5] Sava-Segal, C., Richards, C., Leung, M., & Finn, E. S. (2023). Individual differences in neural event segmentation of continuous experiences. Cerebral Cortex, 33(13), 8164–8178.
[6] Nastase, S. A. et al. (2021). The "Narratives" fMRI dataset for evaluating models of naturalistic language comprehension. Scientific Data, 8, 250.