Neural mechanisms of insight during narrative comprehension

Poster No:

1084 

Submission Type:

Abstract Submission 

Authors:

Hayoung Song1, Jin Ke1, Yuan Chang Leong2, Monica Rosenberg1

Institutions:

1University of Chicago, Chicago, IL, 2The University of Chicago, Chicago, IL

First Author:

Hayoung Song  
University of Chicago
Chicago, IL

Co-Author(s):

Jin Ke  
University of Chicago
Chicago, IL
Yuan Chang Leong  
The University of Chicago
Chicago, IL
Monica Rosenberg  
University of Chicago
Chicago, IL

Introduction:

How do we experience insight, or a feeling of "aha"? Behavioral evidence suggests that we experience insight as we comprehend the causal structure of events [1]. A block-design study showed that insight leads to changes in representation patterns in the hippocampus and the medial prefrontal cortex (mPFC) [2]. Despite this initial evidence, however, the cognitive and neural mechanisms of insight during naturalistic, continuous narrative comprehension remain to be studied. This study introduces a novel experimental design to ask how memory retrieval and causal reasoning guide insight during an unfolding narrative.

Methods:

We collected fMRI data as human participants (N = 36) watched an episode of the TV show This Is Us (41 min 40 s). To elicit multiple "aha" moments, the episode was segmented into 48 events (each ~50 s) and scrambled in temporal order. The scrambled events were grouped into 10 fMRI runs (~5 min each). Participants were randomly assigned to one of three scrambled-order groups, such that each third of the participants watched the episode in the same scrambled order (Fig 1A). Participants were instructed to press an "aha" button whenever they understood something new about the show's events and characters. After watching each set of events, participants were shown screenshots taken at the moments they pressed the aha button and were asked to verbally explain their insight at these moments (Fig 1B).
Supporting Image: ohbm_song_figure1.png
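As an illustration of the design above, the following is a minimal sketch of how events could be scrambled and grouped into runs. The counts (48 events, 10 runs, 3 order groups, 36 participants) follow the abstract, but the seeding, grouping, and assignment scheme, and all function and variable names, are hypothetical and not taken from the study's code.

```python
import numpy as np

N_EVENTS = 48    # ~50-s events segmented from the 41 min 40 s episode
N_RUNS = 10      # scrambled events grouped into ~5-min fMRI runs
N_GROUPS = 3     # one-third of participants share each scrambled order
N_SUBJECTS = 36

rng = np.random.default_rng(seed=0)  # hypothetical seed

# One fixed scrambled order per group; everyone within a group sees the same order.
group_orders = [rng.permutation(N_EVENTS) for _ in range(N_GROUPS)]

# Split a scrambled order into runs of roughly equal length (48 events across 10 runs).
def runs_from_order(order, n_runs=N_RUNS):
    return np.array_split(order, n_runs)

# Hypothetical round-robin assignment of participants to the three order groups.
group_assignment = np.arange(N_SUBJECTS) % N_GROUPS
runs_for_subject_0 = runs_from_order(group_orders[group_assignment[0]])
```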

Results:

The frequency of aha button presses varied across participants, ranging from roughly four presses per minute to one press every three minutes. Despite this variability, aha button presses were synchronous across participants (Dice coefficient compared to a chance distribution; z = 37.0, p < 0.001). Coding of the verbal responses showed that 55.05% (SD 23.52%) of participants' explanations of aha moments mentioned past events, suggesting that insight arises from retrieving causally related past events from memory.
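The synchrony analysis can be sketched as a pairwise Dice coefficient on binarized button-press time courses, compared against a permutation-based chance distribution. The time-binning of presses, the circular-shift null, and all names below are assumptions; the abstract reports only the Dice statistic and its z-score against chance.

```python
import numpy as np
from itertools import combinations

def dice(a, b):
    """Dice coefficient between two binary press vectors (1 = press within a time bin)."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 0.0

def group_synchrony(presses):
    """Mean pairwise Dice coefficient; presses is participants x time bins (binary)."""
    pairs = combinations(range(presses.shape[0]), 2)
    return np.mean([dice(presses[i], presses[j]) for i, j in pairs])

def chance_distribution(presses, n_perm=1000, rng=None):
    """Null synchrony from circularly shifting each participant's presses by a random lag."""
    rng = rng or np.random.default_rng(0)
    n_bins = presses.shape[1]
    null = np.empty(n_perm)
    for p in range(n_perm):
        shifted = np.stack([np.roll(row, rng.integers(n_bins)) for row in presses])
        null[p] = group_synchrony(shifted)
    return null

# observed = group_synchrony(presses)
# null = chance_distribution(presses)
# z = (observed - null.mean()) / null.std()
```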

To identify brain areas that represent the causal structure of events, we correlated an event-by-event causal relationship matrix with an event-by-event voxel pattern similarity matrix extracted from each of 100 cortical parcels [3] (Fig 2B). The causal relationship matrix was constructed from participants' verbal responses: if a past event was recalled at an aha moment, the pair of events was scored as causally related. Voxel activity patterns in the mPFC, retrosplenial cortex, and early visual cortices represented causally related events as similar to one another and causally unrelated events as dissimilar, even when controlling for semantic similarity between events (Fig 2A).
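The parcel-wise analysis can be sketched as a representational similarity analysis: correlate the lower triangles of the causal-relationship and neural pattern-similarity matrices while partialling out semantic similarity. The regression-based partial Pearson correlation used here is an assumption (the abstract does not specify the estimator), and all variable names are illustrative.

```python
import numpy as np
from scipy import stats

def lower_tri(mat):
    """Vectorize the below-diagonal entries of an event-by-event matrix."""
    i, j = np.tril_indices(mat.shape[0], k=-1)
    return mat[i, j]

def partial_corr(x, y, covariate):
    """Correlation between x and y after regressing the covariate out of both."""
    def residualize(v):
        design = np.column_stack([np.ones_like(covariate), covariate])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    return stats.pearsonr(residualize(x), residualize(y))

# causal:   48 x 48 binary matrix from verbal reports (1 = pair reported as causally related)
# neural:   48 x 48 voxel-pattern similarity matrix for one parcel
# semantic: 48 x 48 semantic similarity matrix between event descriptions
# r, p = partial_corr(lower_tri(neural), lower_tri(causal), lower_tri(semantic))
```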

Next, we hypothesized that activity patterns in these brain regions would shift at aha moments due to a change in event representation. To test this, we applied a hidden Markov model (HMM) [4] to the voxel activity pattern time series of each parcel. Sudden shifts in representation patterns occurred ~2 s before aha button presses. The effect was widespread across the cortex, including the bilateral mPFC, where the likelihood of HMM-detected boundaries increased significantly ~2 s before button presses (Fig 2C, D). Furthermore, cortico-hippocampal cofluctuation time series [5] showed that these representational shifts were accompanied by a transient decoupling between the hippocampus and the mPFC [6], again ~2 s before button presses (Fig 2E, F). This indicates that moments of insight are characterized by a transient change in event representation in the mPFC, in coordination with the hippocampus.
Supporting Image: ohbm_song_figure2.png
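A rough sketch of the two dynamic analyses follows, assuming the BrainIAK EventSegment model as an implementation of the HMM in [4] and z-scored element-wise products as the cofluctuation (edge) time series of [5]. The number of HMM states, the press-locked averaging window, and all variable names are assumptions, not details reported in the abstract.

```python
import numpy as np
from scipy.stats import zscore
from brainiak.eventseg.event import EventSegment  # one available implementation of the HMM in [4]

# --- State-shift (boundary) time course from an HMM fit to one parcel's pattern time series ---
def hmm_boundary_timecourse(parcel_ts, n_states):
    """parcel_ts: TRs x voxels; returns a binary time course marking HMM state shifts."""
    hmm = EventSegment(n_states)
    hmm.fit(parcel_ts)
    state_seq = np.argmax(hmm.segments_[0], axis=1)            # most likely state at each TR
    return np.r_[0, (np.diff(state_seq) != 0)].astype(float)   # 1 where the latent state changes

# --- Hippocampus-mPFC cofluctuation (edge time series, as in [5]) ---
def cofluctuation(ts_a, ts_b):
    """Moment-to-moment coupling; values near or below zero indicate transient decoupling."""
    return zscore(ts_a) * zscore(ts_b)

# --- Average either time course around aha button presses (press times given in TRs) ---
def press_locked_average(timecourse, press_trs, window=(-5, 5)):
    segs = [timecourse[t + window[0]: t + window[1] + 1]
            for t in press_trs
            if t + window[0] >= 0 and t + window[1] < len(timecourse)]
    return np.mean(segs, axis=0)
```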

Conclusions:

Insight arises from retrieving causally related past events from memory. The mPFC represents causal event structure and shows a shift in representation patterns at moments of insight, in dynamic interaction with the hippocampus. This study demonstrates that insight during narrative comprehension involves memory retrieval and causal reasoning, as well as dynamic reconfiguration and representational change in the hippocampal-default mode network circuit.

Higher Cognitive Functions:

Reasoning and Problem Solving 2

Learning and Memory:

Long-Term Memory (Episodic and Semantic) 1

Keywords:

Cognition
FUNCTIONAL MRI
Learning
Memory

1|2 Indicates the priority used for review

References:

[1] Song, H., Park, B.-Y., Park, H., Shim, W. M. (2021). 'Cognitive and neural state dynamics of narrative comprehension', Journal of Neuroscience, 41 (43), 8972-8990.
[2] Milivojevic, B., Vicente-Grabovetsky, A., Doeller, C. F. (2015). 'Insight reconfigures hippocampal-prefrontal memories', Current Biology, 25 (7), 821-830.
[3] Schaefer, A., Kong, R., Gordon, E. M. et al. (2018). 'Local-global parcellation of the human cerebral cortex from intrinsic functional connectivity MRI', Cerebral Cortex, 28 (9), 3095-3114.
[4] Baldassano, C., Chen, J., Zadbood, A., Pillow, J. W., Hasson, U., Norman, K. A. (2017). 'Discovering event structure in continuous narrative perception and memory', Neuron, 95 (3), 709-721.
[5] Zamani Esfahlani, F., Jo, Y., Faskowitz, J., Betzel, R. F. (2020). 'High-amplitude cofluctuations in cortical activity drive functional connectivity', Proceedings of the National Academy of Sciences, 117 (45), 28393-28401.
[6] Van Kesteren, M. T. R., Ruiter, D. J., Fernandez, G., Henson, R. N. (2012). 'How schema and novelty augment memory formation', Trends in Neurosciences, 35 (4), 211-219.
[7] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I. (2019). 'Language models are unsupervised multitask learners', OpenAI Blog.