Poster No:
793
Submission Type:
Abstract Submission
Authors:
Marianne Reddan1, Desmond Ong2, Tor Wager3, Sonny Mattek4, Isabella Kahhale5, Jamil Zaki6
Institutions:
1Albert Einstein College of Medicine, The Bronx, NY, 2University of Texas at Austin, Austin, TX, 3Dartmouth College, Hanover, NH, 4University of Oregon, Eugene, OR, 5University of Pittsburgh, Pittsburgh, PA, 6Stanford University, Stanford, CA
Introduction:
Humans seamlessly transform dynamic social signals into inferences about the internal states of the people around them. The neural processes that lead to conscious inference are computationally complex and require integration across multiple sources of information such as an observer's internal homeostatic state, their past experiences, expectations, and social schemas [1,2]. Neuroimaging studies have implicated the medial prefrontal cortex, temporoparietal junction, and precuneus [3,4] in socioemotional inference; however, integrated models of how multiple brain regions interact to form an inference are lacking. Furthermore, prior research sometimes equates the signaller's intended emotion with the observer's inference. Here we seek to (1) disentangle the neural processes underlying the perception of signal intent from inference, and (2) test how concordance between these processes is related to inference accuracy.
Methods:
We collected fMRI data from participants (N = 100) as they watched 24 videos of social targets describing real-life emotional stories of both positive and negative valence, drawn from the Stanford Emotional Narratives Dataset (SENDv1) [5]. Participants (observers) rated the emotional intensity of the targets as they described significant life events; targets rated themselves on the same scale to indicate the intended "ground truth" emotional intensity of their videos. We used LASSO-PCR (least absolute shrinkage and selection operator regression on principal components), a multivariate regression technique, to train two models of observer brain activity via leave-one-subject-out cross-validation (Fig 1). Eight video trials comprised the training set, while the remaining 12 trials comprised the held-out validation set.
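A minimal sketch of this modeling approach, assuming scikit-learn and synthetic data (the array sizes, PCA dimensionality, and LASSO penalty here are illustrative placeholders, not the study's actual parameters), might look like:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_trials, n_voxels = 10, 8, 500
X = rng.standard_normal((n_subjects * n_trials, n_voxels))  # trial-level brain maps
y = rng.uniform(0, 10, n_subjects * n_trials)               # intensity ratings
groups = np.repeat(np.arange(n_subjects), n_trials)         # subject labels

# LASSO-PCR: project the voxel data onto principal components, then fit an
# L1-penalized (LASSO) regression on the component scores.
model = make_pipeline(PCA(n_components=20), Lasso(alpha=0.1))

# Leave-one-subject-out cross-validation: each subject's trials are held out
# exactly once, so every prediction comes from a model that never saw that subject.
preds = cross_val_predict(model, X, y, groups=groups, cv=LeaveOneGroupOut())
r = np.corrcoef(preds, y)[0, 1]  # cross-validated prediction-outcome correlation
```

The same pipeline would be fit twice in practice: once with the targets' self-reported "ground truth" ratings as `y`, and once with the observers' inference ratings.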
After identifying these two dissociable components of socioemotional processing, we tested how they interact in relation to an individual's empathic accuracy. To do this, we applied the "ground truth" and inference models to participant-level brain activity during inaccurate inferences (low empathic accuracy) and accurate inferences (high empathic accuracy; Fig 2A). We then correlated the predictions of the two models across all participants.

Figure 1.
Results:
Both the "ground truth" emotional state of the target (r = 0.50, 5,000 bootstrap samples, P < 0.0001) and an observer's inference about the target (r = 0.53, 5,000 bootstrap samples, P < 0.0001) could be predicted from the observer's brain activity. The neural patterns predictive of "ground truth" and inference were dissociable (cosine similarity = 0.29). The alignment between the two models was significantly greater during high-accuracy performance than low-accuracy performance (z = -2.71, P = 0.003; Fig 2B). We verified this effect in the validation trials (low empathic accuracy alignment r = 0.58, P < 0.0001; high empathic accuracy alignment r = 0.79, P < 0.0001; two-tailed z-test of the correlation difference z = 2.83, P = 0.005; Fig 2C).

Figure 2.
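The comparison between the two alignment correlations can be carried out with a standard Fisher r-to-z test; a sketch of that test follows, where the per-condition sample sizes (n = 100) are illustrative assumptions rather than values reported in the abstract:

```python
import numpy as np
from scipy.stats import norm

def fisher_z_diff(r1, n1, r2, n2):
    """Two-tailed z-test for the difference between two independent
    correlation coefficients via Fisher's r-to-z transform."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)          # Fisher transform
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))    # standard error of the difference
    z = (z2 - z1) / se
    p = 2.0 * norm.sf(abs(z))                        # two-tailed p-value
    return z, p

# Illustrative call with the reported alignment correlations.
z, p = fisher_z_diff(0.58, 100, 0.79, 100)
```

With n = 100 per condition this yields a z near the reported value; the exact statistic depends on the true per-condition sample sizes.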
Conclusions:
Using naturalistic socioemotional stimuli and machine learning, we developed reliable brain signatures that predict what an observer thinks about a target and what the target thinks about themselves, and we investigated the correspondence between them. Interestingly, we found that greater concordance between the latent representation of the target's intended signal intensity and the observer's inference indicates greater empathic accuracy. These results give insight into the neural processes that transform social signals into conscious inference during social interactions. Future work will apply these signatures to clinical data to improve our understanding of socioemotional dysfunction in people with mood disorders.
Emotion, Motivation and Social Neuroscience:
Emotional Perception 2
Social Cognition 1
Modeling and Analysis Methods:
Activation (eg. BOLD task-fMRI)
Classification and Predictive Modeling
Multivariate Approaches
Keywords:
ADULTS
Cognition
Emotions
FUNCTIONAL MRI
Machine Learning
Multivariate
1|2 indicates the priority used for review
References:
Chang, L. J., Jolly, E., Cheong, J. H., Rapuano, K. M., Greenstein, N., Chen, P.-H. A., & Manning, J. R. (2021). Endogenous variation in ventromedial prefrontal cortex state dynamics during naturalistic viewing reflects affective experience. Science Advances, 7(17), eabf7129. https://doi.org/10.1126/sciadv.abf7129
Pugh, Z. H., Choo, S., Leshin, J. C., Lindquist, K. A., & Nam, C. S. (2022). Emotion depends on context, culture and their interaction: Evidence from effective connectivity. Social Cognitive and Affective Neuroscience, 17(2), 206–217. https://doi.org/10.1093/scan/nsab092
Zaki, J., Weber, J., Bolger, N., & Ochsner, K. (2009). The neural bases of empathic accuracy. Proceedings of the National Academy of Sciences, 106(27), 11382–11387. https://doi.org/10.1073/pnas.0902666106
Skerry, A. E., & Saxe, R. (2014). A Common Neural Code for Perceived and Inferred Emotion. The Journal of Neuroscience, 34(48), 15997–16008. https://doi.org/10.1523/JNEUROSCI.1676-14.2014
Ong, D. C., Wu, Z., Zhi-Xuan, T., Reddan, M., Kahhale, I., Mattek, A., & Zaki, J. (2021). Modeling emotion in complex stories: The Stanford Emotional Narratives Dataset. IEEE Transactions on Affective Computing, 12(3), 579–594.