Poster No:
918
Submission Type:
Abstract Submission
Authors:
Logan Dowdle1, Luca Vizioli1, Essa Yacoub1, Geoff Ghose1
Institutions:
1Center for Magnetic Resonance Research, Minneapolis, MN
Introduction:
Human faces are a remarkable visual stimulus, in that subtle changes in perceived features support a powerful information classification system. For example, social categories such as emotive state are of obvious and immediate behavioral relevance, yet are conveyed by modest changes in visual input. Previous research has identified a clear network of brain regions that favor faces relative to other stimulus categories (Haxby et al., 2001; Kanwisher and Yovel, 2006); however, how faces are actually processed to extract social meaning remains unknown. Previous work from our group suggests that face-stimulus-relevant tasks, as opposed to somewhat generic tests of memory such as the N-back, may be necessary to characterize these complex networks (Dowdle et al., 2021). Here we extend those findings under a social perception framework, acquiring neuroimaging data at fine spatial and fast temporal scales.
Methods:
Nine participants completed a non-social color task and two social tasks: perception of gender and perception of expression. Across two separate visits we obtained high spatial resolution (fine session; 0.8 mm isotropic) and high temporal resolution (fast session; 0.5 s) 7 Tesla BOLD images. Participants viewed partially degraded faces (1 s on, 2 to 6 s ISI) and made 2AFC choices (male/female; happy/neutral; blue/red border). Stimuli and timing were identical across tasks. NORDIC denoising (Vizioli et al., 2021) was applied to maximize SNR. Responses to each stimulus class and task-specific hemodynamic responses were estimated using a finite impulse response (FIR) model. Activations were examined both within the typical univariate framework and with a cross-validated multivariate pattern analysis (MVPA) searchlight approach to determine which cortical areas contain information sufficient to distinguish the tasks. The searchlight was applied across the entire cortical surface for the fast session (averaging over depths), and at three distinct cortical depths within the fine spatial session. Typical face regions of interest (ROIs) were derived from a separate face localizer task (Stigliani et al., 2015), completed in both sessions.
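The sketch below is a minimal, illustrative version of these two analysis steps, not the code used for this study: it builds an FIR design matrix (one regressor per post-stimulus time bin at the 0.5 s sampling rate) and runs a cross-validated task decoder on the voxel patterns of a single searchlight using scikit-learn. The synthetic data, timing values, and variable names (e.g., n_fir_bins, patterns) are assumptions for illustration only.

```python
# Minimal sketch of (1) FIR response estimation and (2) cross-validated task
# decoding within one searchlight. Synthetic data and timing are placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# --- 1) FIR design matrix: one delta regressor per post-stimulus time bin ---
tr, n_scans, n_fir_bins = 0.5, 800, 24          # 0.5 s sampling, 12 s FIR window
onsets = np.arange(10, 390, 8)                  # illustrative stimulus onsets (s)
X_fir = np.zeros((n_scans, n_fir_bins))
for onset in onsets:
    start = int(round(onset / tr))
    for b in range(n_fir_bins):
        if start + b < n_scans:
            X_fir[start + b, b] = 1.0           # stimulus present at delay bin b

y_bold = rng.standard_normal(n_scans)           # placeholder voxel time series
fir_betas, *_ = np.linalg.lstsq(X_fir, y_bold, rcond=None)  # estimated response shape

# --- 2) Cross-validated decoding of task identity within one searchlight ----
# Rows = trials (response patterns over the voxels in the sphere), labels = task.
n_trials, n_voxels = 120, 50
patterns = rng.standard_normal((n_trials, n_voxels))
task_labels = np.repeat([0, 1], n_trials // 2)  # e.g., gender task vs. expression task

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = cross_val_score(LinearSVC(max_iter=5000), patterns, task_labels, cv=cv)
print(f"mean cross-validated decoding accuracy: {accuracy.mean():.2f}")
```

In the full searchlight analysis, the decoding step would be repeated at every surface vertex (or at each cortical depth for the fine session), yielding an accuracy map that is then thresholded with a permutation-based correction.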
Results:
In univariate analyses (Fig 1A), the non-social task showed significantly weaker responses (p<0.05, corrected) in the fusiform and occipital face areas (FFA; OFA) than the social tasks (Fig 1B). However, no significant differences were found between the two social perception tasks (Fig 1C). The whole-brain searchlight reproduced the univariate findings, decoding non-social versus social tasks in typical face ROIs. In addition, we observed significant (pFWE<0.05, permutation corrected) decoding between the two social tasks, notably outside of typical ROIs and spread across the cortical surface (Fig 2). Zooming in to the ventral temporal cortex, we found that decoding between the social tasks differed across cortical depths: in anterior temporal cortex the outer depths contained more information for separating the social tasks, whereas in posterior regions decoding was more successful in the inner depths.
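For readers unfamiliar with permutation-based family-wise error control of searchlight maps, the sketch below shows one common max-statistic approach; this is an assumed, generic procedure rather than the exact correction used here, and the toy scoring function and variable names are placeholders.

```python
# Max-statistic permutation sketch: the pFWE < 0.05 threshold is the 95th
# percentile of the maximum score obtained across label permutations.
import numpy as np

rng = np.random.default_rng(1)

def searchlight_scores(patterns, labels):
    """Toy stand-in returning one score per searchlight center.
    In practice this would rerun the cross-validated classifier everywhere."""
    return np.array([abs(np.corrcoef(p, labels)[0, 1]) for p in patterns])

n_centers, n_trials = 200, 120
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.standard_normal((n_centers, n_trials))

observed = searchlight_scores(patterns, labels)

n_perm = 500
max_null = np.empty(n_perm)
for i in range(n_perm):
    permuted = rng.permutation(labels)            # break the pattern-task link
    max_null[i] = searchlight_scores(patterns, permuted).max()

threshold = np.quantile(max_null, 0.95)           # FWE-corrected 0.05 threshold
print(f"threshold = {threshold:.3f}, significant centers = {(observed > threshold).sum()}")
```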

Fig 1. A) Similar responses across tasks. B) Differences between social and non-social tasks. C) No significant differences observed between the two social tasks. Black outlines show typical ROIs.

Fig 2. Color denotes successful decoding between single or multiple task pairs, with clusters at pFWE<0.05. A) Subpopulations of decoding accuracy. B) Social task decoding outside typical ROIs.
Conclusions:
Through the lens of socially relevant task demands and high spatial and temporal resolution brain imaging, we observe signals that: 1) are modulated by task demands, 2) carry information content outside of typical face areas, 3) may not be resolvable with typical acquisition strategies, and 4) are relevant to naturalistic processes. Our depth-dependent findings suggest that, for the ventral temporal cortex, averaging over depths may obscure information. Collectively, these results underscore fMRI's ability to capture dynamic changes across multiple scales based on moment-to-moment perceptual demands.
Emotion, Motivation and Social Neuroscience:
Social Cognition
Higher Cognitive Functions:
Executive Function, Cognitive Control and Decision Making 1
Modeling and Analysis Methods:
Activation (eg. BOLD task-fMRI)
Classification and Predictive Modeling
Perception, Attention and Motor Behavior:
Perception: Visual 2
Keywords:
Cognition
Consciousness
Emotions
FUNCTIONAL MRI
HIGH FIELD MR
Multivariate
Perception
Vision
Other - top-down; bottom-up; feedforward; feedback
1|2 indicates the priority used for review
References:
Dowdle, L.T. et al. (2021), Clarifying the role of higher-level cortices in resolving perceptual ambiguity using ultra high field fMRI. NeuroImage 227, 117654. https://doi.org/10.1016/j.neuroimage.2020.117654
Haxby, J.V. et al. (2001), Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425–2430. https://doi.org/10.1126/science.1063736
Kanwisher, N., Yovel, G. (2006), The fusiform face area: a cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society B: Biological Sciences 361, 2109–2128. https://doi.org/10.1098/rstb.2006.1934
Stigliani, A. et al. (2015), Temporal processing capacity in high-level visual cortex is domain specific. Journal of Neuroscience 35, 12412–12424. https://doi.org/10.1523/JNEUROSCI.4822-14.2015
Vizioli, L. et al. (2021), Lowering the thermal noise barrier in functional brain mapping with magnetic resonance imaging. Nature Communications 12, 5181. https://doi.org/10.1038/s41467-021-25431-8
Supported by National Institutes of Health RF1 MH117015