Poster No:
822
Submission Type:
Abstract Submission
Authors:
Yulei Shen1, Takahiko Koike2, Shohei Tsuchimoto3, Ayumi Yoshioka4, Kanae Ogasawara2, Norihiro Sadato5, Hiroki Tanabe1
Institutions:
1Department of Cognitive & Psychological Sciences, Graduate School of Informatics, Nagoya University, Nagoya, Aichi, 2Inter-Brain Dynamics Collaboration Unit, RIKEN Center for Brain Science, Tokyo, Tokyo, 3Division of Neural Dynamics, NIPS, Okazaki, Aichi, 4Section of Brain Function Information, National Institute for Physiological Sciences, Okazaki, Aichi, 5Ritsumeikan University, Kyoto, Kyoto
First Author:
Yulei Shen
Department of Cognitive & Psychological Sciences, Graduate School of Informatics, Nagoya University
Nagoya, Aichi
Co-Author(s):
Takahiko Koike
Inter-Brain Dynamics Collaboration Unit, RIKEN Center for Brain Science
Tokyo, Tokyo
Ayumi Yoshioka
Section of Brain Function Information, National Institute for Physiological Sciences
Okazaki, Aichi
Kanae Ogasawara
Inter-Brain Dynamics Collaboration Unit, RIKEN Center for Brain Science
Tokyo, Tokyo
Hiroki Tanabe
Department of Cognitive & Psychological Sciences, Graduate School of Informatics, Nagoya University
Nagoya, Aichi
Introduction:
We often transform what we see into spoken information and convert the spoken words we hear into vivid mental images. To investigate the neural mechanisms of this process, we conducted a hyperscanning fMRI experiment. Unlike previous hyperscanning fMRI studies [1,3], this study examined a more nuanced aspect: the shared neural spatial representation of tangible information. Whereas the sender's perception of the target constitutes a direct representation, the receiver obtains information that has already been transformed by the sender and thus forms an indirect representation of the target, whose fidelity may depend on the amount of transmitted information. We hypothesized that the receiver's neural representations become more closely aligned with those of the sender when more content is transmitted. To test this hypothesis, we computed the similarity of spatial activity patterns between interacting pairs.
Methods:
Forty-six subjects participated in the experiment and were randomly assigned to same-sex dyads. We designed a novel "introduction-response" hyperscanning task. In this task, one participant (the sender) viewed a face picture and verbally introduced it to the partner (the receiver) within 16 s, based on hints provided by the experimenter; the receiver then imagined the picture within 6 s after hearing the description. The roles of sender and receiver were assigned pseudo-randomly, so that each participant acted in both roles. To examine whether the effect of mental imagery was further modulated by the amount of information, the sender was given either two hints (low vividness) or five hints (high vividness) with which to describe the face picture.
MRI time-series data were acquired using two MRI scanners (Magnetom Verio 3T, Siemens) with standard 32-channel phased-array coils. Functional images were acquired using a T2*-weighted gradient-echo EPI multiband sequence (TR = 1,000 ms, voxel size 2 × 2 × 2 mm3). Anatomical images were acquired using a T1-weighted MP-RAGE sequence (voxel size 0.8 × 0.8 × 0.8 mm3).
Image preprocessing was performed with FSL. To identify brain regions showing high spatial pattern similarity between sender and receiver, the preprocessed whole-brain data were parcellated into 400 nodes using the Schaefer 400-parcel atlas [2]. We then calculated, for each voxel, the average intensity of the sender's time series during the speaking period and of the receiver's time series during the imagery period. For each node, the voxel values from sender and receiver were spatially aligned and Pearson's correlation was calculated for each trial in each condition. Leave-one-trial-out iterations within each condition were used to validate the similarity matrices. The correlation coefficients were Fisher z-transformed. We then constructed similarity difference matrices by subtracting the low-vividness from the high-vividness similarity matrices. Bootstrapping with 3,000 iterations was used to generate a null distribution for these difference matrices, and FDR correction was applied for multiple comparisons.
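For illustration, the following is a minimal sketch of the node-wise similarity analysis in Python (numpy/scipy/statsmodels). The function names, array shapes, and the label-shuffling null distribution are assumptions made for exposition only and do not reproduce the actual analysis code or the exact bootstrap procedure used in the study.

import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import fdrcorrection

def node_similarity(sender_patterns, receiver_patterns):
    # Hypothetical inputs: arrays of shape (n_trials, n_voxels_in_node), holding
    # the period-averaged spatial pattern of one node for one condition.
    # Returns the Fisher z-transformed spatial correlation per trial.
    z = []
    for s, r in zip(sender_patterns, receiver_patterns):
        rho, _ = pearsonr(s, r)      # spatial pattern similarity (sender vs. receiver)
        z.append(np.arctanh(rho))    # Fisher z transform
    return np.asarray(z)

def vividness_contrast(z_high, z_low, n_iter=3000, seed=0):
    # High-minus-low similarity difference for one node, with a resampled null
    # distribution (label shuffling here; the study reports bootstrapping).
    rng = np.random.default_rng(seed)
    diff = z_high.mean() - z_low.mean()
    pooled = np.concatenate([z_high, z_low])
    null = np.empty(n_iter)
    for i in range(n_iter):
        shuffled = rng.permutation(pooled)
        null[i] = shuffled[:len(z_high)].mean() - shuffled[len(z_high):].mean()
    p = np.mean(np.abs(null) >= np.abs(diff))   # two-sided p value against the null
    return diff, p

# Across the 400 nodes, one p value per node is collected and corrected for
# multiple comparisons, e.g.:
# rejected, p_fdr = fdrcorrection(node_p_values, alpha=0.05)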
Results:
When sharing more vivid face information, significantly higher pairwise spatial pattern similarity between the sender's speaking phase and the receiver's imagery phase was found in the visual system (bilateral V1, V2 and V3; right FG), the frontoparietal cortex (bilateral FP, IFG, SFG, MFG and preSMA; left precentral gyrus, SPL, IPS and IPL; right FOC), the DMN (ACC and ParaCC), and the right insula and hippocampus (Fig. 1, p < 0.05, FDR-corrected).

Fig. 1
Conclusions:
The present study showed that, even in the absence of alignment at the level of physical time points, the spatial patterns of neural activity representing the same target information become increasingly similar between interacting partners as that information becomes more specific during real-time interaction. This similarity extended over a wide range of areas, including the visual cortex, the semantic processing network, the visual imagery network, and the default mode network.
Emotion, Motivation and Social Neuroscience:
Social Interaction 1
Higher Cognitive Functions:
Higher Cognitive Functions Other 2
Keywords:
Social Interactions
Other - hyperscanning fMRI, inter-subject correlation, verbal communication
1|2 indicates the priority used for review
References:
[1] Koike, Takahiko, et al. (2019). "Role of the right anterior insular cortex in joint attention-related identification with a partner." Social Cognitive and Affective Neuroscience, 14(10), 1131-1145.
[2] Schaefer, Alexander, et al. (2018). "Local-global parcellation of the human cerebral cortex from intrinsic functional connectivity MRI." Cerebral Cortex, 28(9), 3095-3114.
[3] Yoshioka, Ayumi, et al. (2021). "Neural substrates of shared visual experiences: a hyperscanning fMRI study." Social Cognitive and Affective Neuroscience, 16(12), 1264-1275.