Structural MRI synthesis of human fetal brain using a generative adversarial network

Poster No:

1872 

Submission Type:

Abstract Submission 

Authors:

Yunseo Park1, YEONGJUN PARK2, Bo-yong Park1,3,4

Institutions:

1Department of Data Science, Inha University, Incheon, Republic of Korea, 2Department of Computer Engineering, Inha University, Incheon, Republic of Korea, 3Department of Statistics and Data Science, Inha University, Incheon, Republic of Korea, 4Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea

First Author:

Yunseo Park  
Department of Data Science, Inha University
Incheon, Republic of Korea

Co-Author(s):

YEONGJUN PARK  
Department of Computer Engineering, Inha University
Incheon, Republic of Korea
Bo-yong Park  
Department of Data Science, Inha University; Department of Statistics and Data Science, Inha University; Center for Neuroscience Imaging Research, Institute for Basic Science
Incheon, Republic of Korea; Suwon, Republic of Korea

Introduction:

Structural magnetic resonance imaging (MRI) facilitates the study of brain anatomy in vivo. Multimodal imaging data, such as T1-weighted (T1w) and T2-weighted (T2w) MRI, provide rich information for understanding the brain; however, acquiring both T1w and T2w scans is often impractical because it is time-consuming and expensive. Image synthesis can mitigate this issue by generating T1w MRI from T2w MRI or vice versa, and this approach has been explored in many prior works on medical imaging data (Chira et al., 2022; Nie et al., 2018). For instance, one study used a generative adversarial network (GAN) to augment brain MRI scans with tumors by generating tumor-inclusive images of other MRI modalities (Huang et al., 2021; Osokin et al., 2017). A GAN comprises a generator and a discriminator (Goodfellow et al., 2014). The generator is trained to fool the discriminator into classifying synthesized images as real, while the discriminator is trained to improve its accuracy in distinguishing real images from counterfeit ones; thus, the two networks are trained adversarially. In this study, we propose a GAN-based synthesis model that generates T1w MRI from T2w images of the fetal brain.
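The opposing objectives described above can be sketched numerically. The snippet below is an illustrative sketch using the standard non-saturating GAN losses, not the authors' implementation; `d_real` and `d_fake` stand for the discriminator's probability outputs on real and synthesized images.

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Discriminator objective: score real images as 1 and fakes as 0.
    # Loss is low when d_real is near 1 and d_fake is near 0.
    return -(np.log(d_real) + np.log(1.0 - d_fake))

def g_loss(d_fake):
    # Generator objective (non-saturating form): push the discriminator's
    # output on synthesized images toward 1, i.e., make fakes look real.
    return -np.log(d_fake)
```

As the generator improves, `d_fake` rises, lowering `g_loss` while raising `d_loss`; training alternates updates to the two networks against these coupled losses.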

Methods:

We obtained T1w and T2w structural MRI of 418 individuals (mean ± standard deviation [SD] gestational age = 40.60 ± 2.27 weeks; 44.5% female) from the Developing Human Connectome Project (dHCP) (Makropoulos et al., 2018). The structural MRI data were preprocessed using the HCP minimal preprocessing pipelines (Glasser et al., 2013). We constructed a conditional GAN to generate T1w MRI from T2w data, adopting the pix2pix model (Isola et al., 2016), which consists of a U-Net-based generator and a PatchGAN-based discriminator, and modified the original two-dimensional (2D) architecture to three dimensions (3D) (Fig. 1A). Subjects were randomly divided into training (n = 267), validation (n = 67), and test (n = 84) datasets. Model performance was evaluated as the mean squared error (MSE) between the actual and synthesized T1w images after intensity normalization to the range [0, 1].
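The evaluation step can be sketched as follows; `minmax_normalize` and `mse` are hypothetical helper names illustrating the [0, 1] intensity normalization and MSE computation described above, not the authors' code.

```python
import numpy as np

def minmax_normalize(img):
    # Rescale voxel intensities to [0, 1] (assumes a non-constant image).
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

def mse(actual, synthesized):
    # Mean squared error between two images after independent normalization.
    a = minmax_normalize(actual.astype(np.float64))
    s = minmax_normalize(synthesized.astype(np.float64))
    return float(np.mean((a - s) ** 2))
```

Normalizing each image independently makes the MSE insensitive to global intensity scaling and offsets, so the metric reflects structural agreement rather than scanner intensity units.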

Results:

We observed that the training and validation losses decreased sharply within the first ten epochs and saturated around 30 epochs (Fig. 1B). We selected the model at epoch 112, which showed the minimum test loss. The MSE between the actual and synthesized T1w images of the test subjects was 0.0022 ± 0.001 (mean ± SD).

Conclusions:

We proposed a GAN-based model, optimized for fetal brains, that synthesizes T1w MRI from T2w data alone. Our pipeline may foster future multimodal neuroimaging studies of the fetal brain.

Funding:
This work was supported by the National Research Foundation of Korea (NRF-2021R1F1A1052303; NRF-2022R1A5A7033499), Institute for Information and Communications Technology Planning and Evaluation (IITP) funded by the Korea Government (MSIT) (No. 2022-0-00448, Deep Total Recall: Continual Learning for Human-Like Recall of Artificial Neural Networks; No. RS-2022-00155915, Artificial Intelligence Convergence Innovation Human Resources Development (Inha University); No. 2021-0-02068, Artificial Intelligence Innovation Hub), and Institute for Basic Science (IBS-R015-D1).

Modeling and Analysis Methods:

Methods Development 1
Other Methods

Neuroanatomy, Physiology, Metabolism and Neurotransmission:

Neuroanatomy Other

Novel Imaging Acquisition Methods:

Anatomical MRI 2

Keywords:

STRUCTURAL MRI
Other - GAN, T1 image synthesis

1|2 Indicates the priority used for review
Supporting Image: OHBM_fig1.JPG
 

References:

Chira, D., Haralampiev, I., Winther, O., Dittadi, A., & Liévin, V. (2022). Image Super-Resolution With Deep Variational Autoencoders. http://arxiv.org/abs/2203.09445
Glasser, M. F., Sotiropoulos, S. N., Wilson, J. A., Coalson, T. S., Fischl, B., Andersson, J. L., Xu, J., Jbabdi, S., Webster, M., Polimeni, J. R., Van Essen, D. C., & Jenkinson, M. (2013). The minimal preprocessing pipelines for the Human Connectome Project. NeuroImage, 80, 105–124. https://doi.org/10.1016/j.neuroimage.2013.04.127
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative Adversarial Networks. http://arxiv.org/abs/1406.2661
Huang, P., Liu, X., & Huang, Y. (2021). Data Augmentation For Medical MR Image Using Generative Adversarial Networks. http://arxiv.org/abs/2111.14297
Isola, P., Zhu, J.-Y., Zhou, T., & Efros, A. A. (2016). Image-to-Image Translation with Conditional Adversarial Networks. http://arxiv.org/abs/1611.07004
Makropoulos, A., Robinson, E. C., Schuh, A., Wright, R., Fitzgibbon, S., Bozek, J., Counsell, S. J., Steinweg, J., Vecchiato, K., Passerat-Palmbach, J., Lenz, G., Mortari, F., Tenev, T., Duff, E. P., Bastiani, M., Cordero-Grande, L., Hughes, E., Tusor, N., Tournier, J.-D., … Rueckert, D. (2018). The Developing Human Connectome Project: A minimal processing pipeline for neonatal cortical surface reconstruction. NeuroImage, 173, 88–112. https://doi.org/10.1016/j.neuroimage.2018.01.054
Nie, D., Trullo, R., Lian, J., Wang, L., Petitjean, C., Ruan, S., Wang, Q., & Shen, D. (2018). Medical Image Synthesis with Deep Convolutional Adversarial Networks. IEEE Transactions on Biomedical Engineering, 65(12), 2720–2730. https://doi.org/10.1109/TBME.2018.2814538
Osokin, A., Chessel, A., Carlos Salas, R. E., & Vaggi, F. (2017). GANs for Biological Image Synthesis. Proceedings of the IEEE International Conference on Computer Vision (ICCV).