Cross-Modal Synthesis of Functional Network Connectivity and Magnetic Resonance Imaging Data

Poster No:

1472 

Submission Type:

Abstract Submission 

Authors:

Reihaneh Hassanzadeh1, Vince Calhoun2

Institutions:

1Georgia Institute of Technology, Atlanta, GA, 2GSU/GATech/Emory, Decatur, GA

First Author:

Reihaneh Hassanzadeh  
Georgia Institute of Technology
Atlanta, GA

Co-Author:

Vince Calhoun  
GSU/GATech/Emory
Decatur, GA

Introduction:

Brain disorders such as Alzheimer's disease (AD) can be characterized by brain imaging methods such as structural MRI (sMRI) and functional MRI (fMRI). Although many imaging datasets include multiple modalities, one or more modalities are often missing for a given subject. This missing-modality issue limits multi-modal analyses, which require all modalities for each subject. We therefore study the challenging cross-modal translation task of transforming sMRI images into fMRI-derived features and vice versa. Specifically, we use generative deep learning to synthesize sMRI from functional network connectivity (FNC), a feature map extracted from fMRI, and to synthesize FNC from sMRI. Our results show that this generative deep learning approach can produce samples close to the real images.

Methods:

We studied 982 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Of these, 920 have an sMRI image, 413 have FNC maps, and 305 have both modalities (i.e., sMRI and FNC). FNC is a correlation map computed as follows: we applied spatially constrained group ICA to the fMRI data to extract 53 component time courses and then computed the Pearson correlation between the time courses to form the FNC matrix. For the generative model, we used CycleGAN [1], adapted to translate 1D flattened FNC maps into 3D sMRI images and vice versa. For translating 3D sMRI images to 1D FNC maps, we used 3D-CNN layers to learn spatial information from the sMRI images and fully connected layers to map it to FNC features.
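The FNC construction described above (Pearson correlation between 53 ICA time courses, flattened for the 1D branch of the model) can be sketched as follows. This is a minimal illustration, not the authors' code; the random time courses and the function name are assumptions, and with 53 components the flattened upper triangle has 53 x 52 / 2 = 1378 features.

```python
import numpy as np

def fnc_from_timecourses(tc):
    """tc: (n_timepoints, n_components) ICA time courses.

    Returns the (53, 53) Pearson correlation matrix and its
    flattened upper triangle (diagonal excluded) as a 1D vector.
    """
    fnc = np.corrcoef(tc, rowvar=False)       # Pearson correlation between columns
    iu = np.triu_indices_from(fnc, k=1)       # upper triangle, diagonal excluded
    return fnc, fnc[iu]

# Illustrative stand-in for real ICA time courses (e.g., 150 TRs, 53 components).
rng = np.random.default_rng(0)
tc = rng.standard_normal((150, 53))
fnc, fnc_vec = fnc_from_timecourses(tc)
print(fnc.shape, fnc_vec.shape)               # (53, 53) (1378,)
```

The 1378-dimensional vector `fnc_vec` corresponds to the "1D flattened FNC map" that the adapted CycleGAN consumes and produces.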

Results:

Figure 1 shows the real sMRI and the sMRI generated by the GAN model, each averaged across all AD samples and controls (CN). The figure indicates that the GAN model effectively generated samples resembling real sMRI using only FNC information. Moreover, comparing AD to CN samples, the generated samples reproduce AD patterns, i.e., brain atrophy, similar to those in the real data. Figure 2 shows the averaged FNC maps for real and generated samples. The model learned the general FNC structure while transforming sMRI images into FNC maps, generating realistic FNC maps. Comparing AD to CN suggests that the model captured some of the reduced connectivity in the AD group, such as the connectivity between the sensory-motor (SM) and subcortical (SC) networks and between the SM and cerebellum (CB) networks.
Supporting Image: Figure1.png
   ·Figure 1. Real and Generated sMRI: AD vs. CN
Supporting Image: Figure2.png
   ·Figure 2. Real and Generated FNC: AD vs. CN
 

Conclusions:

In summary, our study demonstrates the effective use of generative deep learning models, specifically CycleGANs, to translate structural MRI into functional network connectivity data and vice versa. Using the ADNI dataset, we successfully generated realistic sMRI images from FNC maps and vice versa, capturing key pathological features of Alzheimer's disease. This addresses the challenge of missing modalities in brain imaging datasets and opens new paths for enhanced multi-modal analysis in neuroscience research, particularly for understanding and diagnosing neurodegenerative diseases such as Alzheimer's.

Disorders of the Nervous System:

Neurodegenerative/ Late Life (eg. Parkinson’s, Alzheimer’s) 2

Modeling and Analysis Methods:

Classification and Predictive Modeling 1

Keywords:

Machine Learning

1|2 Indicates the priority used for review

References:

[1] Zhu, J.Y., Park, T., Isola, P. and Efros, A.A., 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).