Poster No:
1688
Submission Type:
Abstract Submission
Authors:
Yongseon Yoo1, Seonggyu Kim2, Hanyeol Yang1, Jihwan Min1, Jong-Min Lee1,2
Institutions:
1Department of Artificial Intelligence, Hanyang University, Seoul, Korea, Republic of, 2Department of Electronic Engineering, Hanyang University, Seoul, Korea, Republic of
First Author:
Yongseon Yoo
Department of Artificial Intelligence, Hanyang University
Seoul, Korea, Republic of
Co-Author(s):
Seonggyu Kim
Department of Electronic Engineering, Hanyang University
Seoul, Korea, Republic of
Hanyeol Yang
Department of Artificial Intelligence, Hanyang University
Seoul, Korea, Republic of
Jihwan Min
Department of Artificial Intelligence, Hanyang University
Seoul, Korea, Republic of
Jong-Min Lee
Department of Artificial Intelligence, Hanyang University
Department of Electronic Engineering, Hanyang University
Seoul, Korea, Republic of
Introduction:
Brain MRI is essential for diagnosing neurological diseases, but deep learning models trained on data pooled from multiple sites suffer performance degradation due to site-dependent differences in MR images, known as the "scanner effect." "Harmonization" techniques are used to mitigate this, and recent generative models have outperformed traditional statistical methods in reducing these variations [8, 9]. We apply GAN-based style transfer to the task of cross-site brain MRI harmonization. GAN models typically apply style through Adaptive Instance Normalization (AdaIN) [4, 6], which imparts style by modifying first-order feature statistics. We enhance the generative model so that it learns the style of the target-site MRI more effectively by incorporating the loss functions of Neural Style Transfer (NST) [2], which capture second-order statistics [5] through the Gram matrix. Our model therefore exploits both first- and second-order statistics when learning and applying style. Additionally, to preserve the anatomical structure of the original MRI, we add a loss that reduces the difference between the feature maps of the original and generated images, on top of the existing cycle consistency loss. In summary, our method integrates Neural Style Transfer into a GAN for cross-site brain MRI harmonization, providing a richer style representation than existing approaches.
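As a rough illustration of the distinction between first-order and second-order style statistics (a PyTorch sketch for intuition only, not our exact implementation), AdaIN matches channel-wise means and standard deviations, whereas the Gram matrix captures channel-to-channel correlations:

import torch

def adain(content_feat, style_feat, eps=1e-5):
    # First-order statistics: shift/scale the content feature map so its
    # channel-wise mean and std match those of the style feature map.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

def gram_matrix(feat):
    # Second-order statistics: channel-by-channel correlations of a
    # feature map, as used in the NST style loss.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)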
Methods:
We propose a method that enhances GANs by adding loss functions from Neural Style Transfer [2], using the StarGANv2 [1, 7] model as the backbone. To compute the NST style and content losses, we integrate a pre-trained VGG19 [3] into the existing GAN structure. The style loss is computed by passing the reference image and the generated target image through VGG19, building Gram matrices from the feature maps obtained after the first convolution of each block, and minimizing the difference between the two sets of matrices. The content loss is computed by passing the source image and the generated target image through VGG19 and reducing the difference between the feature maps produced after the second convolution. The figure below shows our network. The architecture consists of a generator that synthesizes images, a mapping network that converts a latent code z into a style code, a style encoder that extracts a style code from an image, a discriminator, and the pre-trained VGG19.
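A minimal sketch of how these two NST losses could be computed from VGG19 features follows; the specific layer indices, the ImageNet-pretrained weights, and the three-channel input handling are assumptions for illustration rather than the exact configuration of our model.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Assumed layer indices in torchvision's VGG19 "features" module:
# the first convolution of each block for style (conv1_1 ... conv5_1)
# and a deeper convolution (conv4_2) for content.
STYLE_LAYERS = (0, 5, 10, 19, 28)
CONTENT_LAYER = 21

vgg = vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram_matrix(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def vgg_features(x, layers):
    # x: (B, 3, H, W); grayscale MRI slices would be repeated to 3 channels.
    feats, out = {}, x
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i in layers:
            feats[i] = out
    return feats

def nst_style_loss(generated, reference):
    g = vgg_features(generated, STYLE_LAYERS)
    r = vgg_features(reference, STYLE_LAYERS)
    return sum(F.mse_loss(gram_matrix(g[i]), gram_matrix(r[i])) for i in STYLE_LAYERS)

def nst_content_loss(generated, source):
    g = vgg_features(generated, (CONTENT_LAYER,))
    s = vgg_features(source, (CONTENT_LAYER,))
    return F.mse_loss(g[CONTENT_LAYER], s[CONTENT_LAYER])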
Training involves several loss functions: an adversarial loss (L_adv) that drives the training of the generator and discriminator; a style reconstruction loss (L_sty) that trains the style encoder; a style diversification loss (L_ds) that encourages the generator to produce diverse images from different style codes; and a cycle consistency loss (L_cyc) that regenerates the original image from the generated image for comparison. In addition, an NST style loss (L_NST-S) compares the Gram matrix of the reference image with that of the generated target image, and an NST content loss (L_NST-C) compares the feature maps of the source image with those of the generated target image.
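A hedged sketch of how the generator objective might combine these terms is shown below; the lambda weights are placeholders rather than the values actually used, and the sign of the diversification term follows the StarGAN v2 convention of maximizing diversity.

# Placeholder loss weights (assumed for illustration, not the trained values).
lambda_sty, lambda_ds, lambda_cyc = 1.0, 1.0, 1.0
lambda_nst_s, lambda_nst_c = 1.0, 1.0

def generator_objective(l_adv, l_sty, l_ds, l_cyc, l_nst_s, l_nst_c):
    # The diversification term is maximized, hence subtracted (StarGAN v2).
    return (l_adv
            + lambda_sty * l_sty
            - lambda_ds * l_ds
            + lambda_cyc * l_cyc
            + lambda_nst_s * l_nst_s
            + lambda_nst_c * l_nst_c)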

·Harmonization Model Architecture
Results:
Our cross-site brain MRI harmonization method was evaluated with quantitative metrics, namely Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) scores.
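For reference, PSNR and SSIM can be computed per slice with scikit-image as sketched below; this is a generic illustration, not necessarily the exact evaluation pipeline used.

from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(reference_img, harmonized_img):
    # Both inputs: 2D float arrays scaled to [0, 1].
    psnr = peak_signal_noise_ratio(reference_img, harmonized_img, data_range=1.0)
    ssim = structural_similarity(reference_img, harmonized_img, data_range=1.0)
    return psnr, ssim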

·Visual comparison and PSNR, SSIM scores
Conclusions:
In conclusion, our approach to cross-site brain MRI harmonization, which integrates Neural Style Transfer with Generative Adversarial Networks, improved both PSNR and SSIM scores. By incorporating both first- and second-order statistics, the method transfers the target-site style while preserving anatomical structure.
Modeling and Analysis Methods:
Exploratory Modeling and Artifact Removal 1
Other Methods 2
Keywords:
Data analysis
Data Organization
Machine Learning
Modeling
MRI
Other - synthesis
1|2 indicates the priority used for review
References:
[1] Choi, Yunjey, Youngjung Uh, Jaejun Yoo and Jung-Woo Ha. “StarGAN v2: Diverse Image Synthesis for Multiple Domains.” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020): 8185-8194.
[2] Gatys, Leon A., Alexander S. Ecker and Matthias Bethge. “Image Style Transfer Using Convolutional Neural Networks.” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016): 2414-2423.
[3] Geirhos, Robert, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix Wichmann and Wieland Brendel. “ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.” ArXiv abs/1811.12231 (2018): n. pag.
[4] Huang, Xun and Serge J. Belongie. “Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization.” 2017 IEEE International Conference on Computer Vision (ICCV) (2017): 1510-1519.
[5] Julesz, Béla. “Visual Pattern Discrimination.” IRE Trans. Inf. Theory 8 (1962): 84-92.
[6] Karras, Tero, Samuli Laine and Timo Aila. “A Style-Based Generator Architecture for Generative Adversarial Networks.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019): 4396-4405.
[7] Liu, M., et al. “Style Transfer Using Generative Adversarial Networks for Multi-site MRI Harmonization.” Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, Lecture Notes in Computer Science, vol. 12903, Springer, Cham (2021). https://doi.org/10.1007/978-3-030-87199-4_30.
[8] Shinohara, Russell T., Elizabeth M. Sweeney, Jeff Goldsmith, Navid Shiee, Farrah J. Mateen, Peter A. Calabresi, Samson Jarso, Dzung L. Pham, Daniel S. Reich and Ciprian M. Crainiceanu. “Statistical normalization techniques for magnetic resonance imaging.” NeuroImage: Clinical 6 (2014): 9-19.
[9] Wrobel, Julia, M. L. Martin, Rohit Bakshi, Peter A. Calabresi, Mark Elliot, David R. Roalf, Ruben C. Gur, Raquel E. Gur, Roland G. Henry, Govind Nair, Jiwon Oh, Nico Papinutto, Daniel Pelletier, Daniel S. Reich, William D. Rooney, Theodore Daniel Satterthwaite, William Stern, Karthik Prabhakaran and Jeff Goldsmith. “Intensity warping for multisite MRI harmonization.” NeuroImage 223 (2020): 117242.