Poster No:
708
Submission Type:
Abstract Submission
Authors:
Hyeonjung Kim1, Jongwan Kim2
Institutions:
1Jeonbuk National University, Jeonju-si, Jeollabuk-do, 2Jeonbuk National University, Jeonju-si, Jeollabuk-do
First Author:
Hyeonjung Kim
Jeonbuk National University
Jeonju-si, Jeollabuk-do
Co-Author:
Jongwan Kim
Jeonbuk National University
Jeonju-si, Jeollabuk-do
Introduction:
Valence is one of the core affect dimensions, describing negative and positive feelings along a single bipolar dimension (Russell, 1980). A central debate about valence concerns whether affective representation is consistent or specific across sensory modalities (Barrett & Bliss-Moreau, 2009). This debate has produced two hypotheses: modality-general (consistent representations across modalities) and modality-specific (unique representations for each modality).
Our study aimed to identify the brain regions supporting the modality-general hypothesis by using a recall paradigm. Recall often evokes emotions similar to those of the prior experience (Tulving, 2002), yet it does not rely on external stimuli. We therefore tested whether valence representations were consistent across watching and recall.
Methods:
2.1. Data and experimental design
In this study, we re-analyzed two shared datasets. Chen et al. (2017) collected fMRI data from 17 participants who watched the first episode of BBC's Sherlock and later orally recalled it during fMRI measurement. The episode was divided into 48 scenes for both the watching and recall sessions. In Kim et al. (2020), the same stimulus was divided into 621 segments, and 125 participants rated their affective responses on a 9×9 grid with valence (x-axis) and arousal (y-axis) dimensions.
We reorganized these two datasets for this study. We excluded the fMRI data of both the watching and recall sessions for any scene that was not successfully recalled. The valence rating of a scene was computed by averaging the valence ratings of the segments within that scene.
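The scene-level averaging step can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, the segment-to-scene mapping, and the input shapes are assumptions.

```python
import numpy as np

# Hypothetical sketch: collapse segment-level valence ratings to scene level.
# `segment_valence` holds one (rater-averaged) rating per segment;
# `segment_to_scene` maps each segment to its scene index (0..n_scenes-1).
def scene_valence(segment_valence, segment_to_scene, n_scenes=48):
    segment_valence = np.asarray(segment_valence, dtype=float)
    segment_to_scene = np.asarray(segment_to_scene)
    # Average all segment ratings that fall within each scene.
    return np.array([segment_valence[segment_to_scene == s].mean()
                     for s in range(n_scenes)])
```

In the actual datasets this would map the 621 segment ratings of Kim et al. (2020) onto the 48 scene boundaries of Chen et al. (2017).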
2.2 Searchlight analysis
To identify regions showing modality-general representation, we conducted a searchlight analysis over 5×5×5 cubes of neighboring voxels with cross-participant, cross-modal regression-based decoding (Fig 1). In each fold, 16 participants were assigned to the training set and the remaining participant to the testing set, with the two sets drawn from different modality conditions. Multiple regression was then fit on the training set, predicting valence ratings from voxel data. We applied the regression coefficients to the testing set's voxel data, yielding predicted valence ratings for the scenes. The Pearson correlation between the predicted valence ratings and the participant's valence ratings served as the prediction accuracy. This procedure was repeated with the modality conditions of the training and testing sets swapped, and the two modality maps were averaged to form a consistency brain map. The whole procedure was repeated 17 times so that each participant served in the testing set once. The resulting brain maps entered a one-sample t-test (uncorrected α=.001) in Statistical Parametric Mapping 12 (SPM12). Significance was assessed with 1,000 permutations (α=.05).
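One decoding fold of the procedure above can be sketched as follows. This is a simplified illustration under assumed names and shapes, not the authors' implementation: `train_X` stacks voxel patterns from the 16 training participants in one modality (e.g., watching), and `test_X` holds the held-out participant's patterns in the other modality (e.g., recall).

```python
import numpy as np

# Hypothetical sketch of one cross-participant, cross-modal decoding fold
# within a single searchlight.
# train_X: (n_train_samples, n_voxels) voxel patterns, one modality
# train_y: (n_train_samples,) valence ratings for those samples
# test_X:  (n_test_scenes, n_voxels) held-out participant, other modality
# test_y:  (n_test_scenes,) observed valence ratings for the test scenes
def crossmodal_fold(train_X, train_y, test_X, test_y):
    # Multiple regression on the training set: prepend an intercept column
    # and solve for the coefficients by least squares.
    X = np.column_stack([np.ones(len(train_X)), train_X])
    beta, *_ = np.linalg.lstsq(X, train_y, rcond=None)
    # Apply the coefficients to the held-out voxel data to predict valence.
    pred = np.column_stack([np.ones(len(test_X)), test_X]) @ beta
    # Prediction accuracy: Pearson correlation between predicted and
    # observed valence ratings across scenes.
    return np.corrcoef(pred, test_y)[0, 1]
```

In the full analysis this fold would run in both modality directions for every searchlight center and every held-out participant, and the resulting accuracy maps would be averaged before group-level testing.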

·Figure 1. Illustration of the searchlight procedure
Results:
We used only the fMRI data for scenes that participants successfully recalled; each participant recalled a different number of scenes (M=34, range=24 to 46), and valence ratings for the scenes were not biased toward one pole of valence (range of scene-averaged valence ratings=-2.98 to 1.08).
To identify regions with consistent valence representation across modalities, we conducted the searchlight analysis with a permutation test. The result revealed the right middle temporal gyrus (MTG), right inferior temporal gyrus (ITG), and left ITG/fusiform gyrus (ps<.05, cluster size>174, Figure 2).

·Figure 2. Illustration of the results of searchlight analysis
Conclusions:
This study, by considering consistency across people, tested whether affective representations during watching and recall were consistent. The results revealed modality-general representation in three brain regions known to be associated with high-level visual processing (e.g., face processing), recall, and emotion. In particular, the ITG is engaged in recalling visual elements. These regions showed consistent affective representations across watching and recall, supporting the modality-general hypothesis of emotion.
Emotion, Motivation and Social Neuroscience:
Emotional Perception
Emotion and Motivation Other 1
Modeling and Analysis Methods:
Multivariate Approaches 2
Keywords:
Data analysis
Emotions
FUNCTIONAL MRI
Open Data
Other - recall
1|2 Indicates the priority used for review
References:
Barrett, L. F., & Bliss-Moreau, E. (2009). 'Affect as a Psychological Primitive'. Advances in Experimental Social Psychology, 41, 167-218.
Chen, J. (2017). 'Shared memories reveal shared structure in neural activity across individuals'. Nature Neuroscience, 20(1), 115-125.
Kim, J. (2020). 'A study in affect: Predicting valence from fMRI data'. Neuropsychologia, 143, 107473.
Russell, J. A. (1980). 'A circumplex model of affect'. Journal of Personality and Social Psychology, 39(6), 1161-1178.
Tulving, E. (2002). 'Episodic Memory: From Mind to Brain'. Annual Review of Psychology, 53(1), 1-25.