Poster No:
1891
Submission Type:
Abstract Submission
Authors:
Jiaying Lin1, Zhuoshuo Li1, Youbing Zeng1, Xiaobo Liu1, Xinting Ge2, Minhua Lu3, Mengting Liu1
Institutions:
1Sun Yat-sen University, Shenzhen, Guangdong, 2Shandong Normal University, Jinan, Shandong, 3Shenzhen University, Shenzhen, Guangdong
First Author:
Jiaying Lin
Sun Yat-sen University
Shenzhen, Guangdong
Co-Author(s):
Xiaobo Liu
Sun Yat-sen University
Shenzhen, Guangdong
Xinting Ge
Shandong Normal University
Jinan, Shandong
Minhua Lu
Shenzhen University
Shenzhen, Guangdong
Introduction:
The pursuit of advanced AI-guided MRI diagnostics has created a growing need for diverse brain imaging datasets. However, variations in acquisition protocols and scanners across sites can introduce biases into subsequent analyses. Existing MRI harmonization methods are primarily tailored to 2D slices, which causes inter-slice inconsistencies when they are applied to 3D volumes. Extending 2D MRI harmonization to 3D images poses several challenges: 1) 3D images have a much larger voxel count; 2) the number of available 3D training instances is limited compared to 2D; and 3) many 3D machine learning methods are computationally demanding and difficult to train stably. Our approach is a GAN-based method that integrates optical flow information to supervise newly generated 2D slices. By focusing on the relationships between neighboring slices, we develop an unsupervised spatial loss that enhances inter-slice coherence.
Methods:
We introduce an unsupervised model that addresses inter-slice variations in 3D MRI harmonization. Leveraging optical flow-based translation and a GAN, the model effectively aligns MR images across domains. Fig. 1(a) illustrates the architecture, which comprises three key modules: (i) a dual-domain generator that synthesizes MR images in the target domain; (ii) a dual-domain discriminator that distinguishes real from generated images; and (iii) a warping operation W that generates the subsequent slice from the current slice and the optical flow.
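As a rough illustration of the warping operation W, the sketch below warps a 2D slice with a dense optical-flow field via bilinear resampling. It assumes PyTorch tensors and a flow expressed in pixel displacements; the function name and tensor layout are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def warp_slice(slice_img, flow):
    """Warp a 2D MR slice toward the next slice using a dense optical-flow field.

    slice_img: (B, 1, H, W) tensor, the current slice.
    flow:      (B, 2, H, W) tensor, per-pixel (dx, dy) displacements in pixels.
    Returns an estimate of the next slice.
    """
    b, _, h, w = slice_img.shape
    # Base sampling grid of pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(slice_img.device)   # (2, H, W)
    coords = base.unsqueeze(0) + flow                                  # displaced coordinates
    # Normalize coordinates to [-1, 1], as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)                   # (B, H, W, 2)
    return F.grid_sample(slice_img, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```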
We first split each 3D MR volume into slices, treating the tissue between neighboring slices as minimally deformed. The model extracts anatomical details from the source domain and biologically irrelevant, scanner-related appearance information from the target domain to produce harmonized images. Through the generator, it performs the domain transformation and applies the warping operation to generate the next slice. These operations are supervised with the Recycle Consistency Loss and the unsupervised spatial consistency loss to enhance the model's effectiveness.

Fig. 1. The architecture of our 3D-translation model.
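The following is a minimal sketch of a recycle-style consistency term of the kind described above: a source slice is translated to the target domain, warped forward by the optical flow, translated back, and compared with the next source slice. The composition order and the L1 penalty are assumptions for illustration, not the published loss.

```python
import torch

def recycle_consistency_loss(x_t, x_next, g_xy, g_yx, warp, flow):
    """Recycle-style consistency between consecutive slices.

    x_t, x_next: current and next source-domain slices, (B, 1, H, W).
    g_xy, g_yx:  generators between the two domains.
    warp:        the flow-based warping operation W.
    flow:        optical flow from slice t to slice t+1.
    """
    y_t = g_xy(x_t)                  # source slice rendered in the target domain
    y_next_pred = warp(y_t, flow)    # predict the next target-domain slice
    x_next_pred = g_yx(y_next_pred)  # map the prediction back to the source domain
    return torch.mean(torch.abs(x_next_pred - x_next))  # L1 recycle penalty
```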
Results:
For qualitative and quantitative comparisons, we trained and tested our model using datasets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) (paired and unpaired), UK Biobank (UKBB), and the Nathan Kline Institute-Rockland Sample (NKI-RS).
In Fig. 2A, we evaluate inter-slice stability using the warping error metric. Our approach outperforms the 2D techniques and most of the 3D methods considered.
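For reference, the warping error used in video style transfer work such as [1] can be computed along the slice direction as sketched below: each harmonized slice is warped forward by the optical flow and compared with the next harmonized slice, optionally masking unreliable pixels. The tensor shapes and the optional mask handling are assumptions.

```python
import torch

def warping_error(harmonized, flows, warp, occ_masks=None):
    """Mean discrepancy between each harmonized slice warped forward by the flow
    and the next harmonized slice; lower values indicate better inter-slice stability.

    harmonized: list of (B, 1, H, W) slices; flows[t] maps slice t to slice t+1.
    occ_masks:  optional per-pair validity masks excluding occluded pixels.
    """
    errors = []
    for t in range(len(harmonized) - 1):
        warped = warp(harmonized[t], flows[t])
        diff = (warped - harmonized[t + 1]) ** 2
        if occ_masks is not None:
            diff = diff * occ_masks[t]
            errors.append(diff.sum() / occ_masks[t].sum().clamp(min=1))
        else:
            errors.append(diff.mean())
    return torch.stack(errors).mean()
```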
In Fig. 2B, we quantitatively evaluate images from both domains using three image similarity metrics. On average, our model significantly outperforms the other methods, with improvements of 1.94 dB in Peak Signal-to-Noise Ratio (PSNR), 27.19% in squared Maximum Mean Discrepancy (MMD²), and 1.03% in Multi-Scale Structural Similarity (MS-SSIM).

Fig. 2. Quantitative comparison.
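As a brief illustration of two of the reported metrics, the sketch below computes PSNR between an image pair and a Gaussian-kernel squared MMD between two feature sets; the kernel bandwidth and feature representation are assumptions, and MS-SSIM would typically be taken from an existing library.

```python
import torch

def psnr(pred, target, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB between a harmonized image and its reference."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(data_range ** 2 / mse)

def mmd_squared(x, y, sigma=1.0):
    """Squared Maximum Mean Discrepancy with a Gaussian kernel between two
    sets of feature vectors of shape (n, d) and (m, d)."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```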
Conclusions:
Inspired by [1], we present an unsupervised 3D MRI harmonization method that minimizes inter-slice variations, preserves anatomical details, and ensures consistent MRI styles using optical flow and two consistency losses. Notably, it is computationally efficient compared with fully 3D methods, making it suitable for resource-limited settings. However, the model currently supports transformations between only two domains and lacks the multi-domain capability of [2], [3], [4]. Future work will focus on extending it to multiple domains.
Modeling and Analysis Methods:
Methods Development 1
Other Methods 2
Keywords:
Learning
MRI
1|2 Indicates the priority used for review
References:
[1] W. Wang, S. Yang, J. Xu, and J. Liu, “Consistent video style transfer via relaxation and regularization,” IEEE Transactions on Image Processing, vol. 29, pp. 9125–9139, 2020, DOI: 10.1109/TIP.2020.3024018.
[2] Y. Choi, Y. Uh, J. Yoo, and J.-W. Ha, “StarGAN v2: Diverse image synthesis for multiple domains,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8185–8194, 2020, DOI: 10.1109/CVPR42600.2020.00821.
[3] W. T. Clarke et al., “Multi-site harmonization of 7 tesla MRI neuroimaging protocols,” NeuroImage, vol. 206, p. 116335, 2020, DOI: 10.1016/j.neuroimage.2019.116335.
[4] M. Liu et al., “Style transfer generative adversarial networks to harmonize multisite MRI to a single reference image to avoid overcorrection,” Human Brain Mapping, vol. 44, no. 14, pp. 4875–4892, 2023, DOI: 10.1002/hbm.26422.