Synthetic generation of FDG-PET from T1-weighted MRI

Poster No:

1882 

Submission Type:

Abstract Submission 

Authors:

Debabrata Mishra1, Zhaolin Chen2,3, Kh Tohidul Islam2, Patrick Kwan1,4, Meng Law1,5, Lucy Vivash1,4, Ben Sinclair1,4

Institutions:

1Dept. of Neuroscience, Monash University, Melbourne, Australia, 2Monash Biomedical Imaging, Monash University, Melbourne, Australia, 3Dept. of DSAI, Monash University, Melbourne, Australia, 4Dept. of Neurology, The Alfred Hospital, Melbourne, Australia, 5Dept. of Radiology, The Alfred Hospital, Melbourne, Australia

First Author:

Debabrata Mishra  
Dept. of Neuroscience, Monash University
Melbourne, Australia

Co-Author(s):

Zhaolin Chen  
Monash Biomedical Imaging, Monash University | Dept. of DSAI, Monash University
Melbourne, Australia
Kh Tohidul Islam  
Monash Biomedical Imaging, Monash University
Melbourne, Australia
Patrick Kwan  
Dept. of Neuroscience, Monash University | Dept. of Neurology, The Alfred Hospital
Melbourne, Australia
Meng Law  
Dept. of Neuroscience, Monash University | Dept. of Radiology, The Alfred Hospital
Melbourne, Australia
Lucy Vivash  
Dept. of Neuroscience, Monash University | Dept. of Neurology, The Alfred Hospital
Melbourne, Australia
Ben Sinclair  
Dept. of Neuroscience, Monash University | Dept. of Neurology, The Alfred Hospital
Melbourne, Australia

Introduction:

Fluorodeoxyglucose positron emission tomography (FDG-PET) is a valuable tool for the diagnosis and management of a variety of brain disorders, including Alzheimer's disease (AD), frontotemporal dementia (FTD), dementia with Lewy bodies (DLB), and malignancy [3,5]. However, its high cost and limited availability often pose significant challenges. Using generative AI to synthesise FDG-PET from T1-weighted MRI, a more widely accessible and affordable imaging modality, offers a promising way to overcome these challenges. The technique could recover complementary information latent in the MRI images that would otherwise require FDG-PET to obtain, thereby enhancing diagnostic accuracy and treatment planning and ultimately improving patient outcomes and healthcare efficiency.

Methods:

Dataset and image preprocessing
We acquired MRI and FDG-PET data from 1146 patients in the ADNI (Alzheimer's Disease Neuroimaging Initiative) database, classified as mild cognitive impairment (MCI), Alzheimer's disease (AD), or cognitively normal (CN). FDG-PET images with multiple time frames were averaged to give a single FDG-PET volume per subject. MRI and PET images were resized to 170x170x170 voxels.
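The two preprocessing steps above can be sketched in a few lines. This is a minimal illustration only, assuming the dynamic PET is loaded as a 4-D (frames, x, y, z) array; the function names are hypothetical, and nearest-neighbour resampling stands in for whatever interpolation the actual pipeline used:

```python
import numpy as np

def average_frames(pet_4d):
    """Collapse a dynamic PET series (frames, x, y, z) into one static volume."""
    return pet_4d.mean(axis=0)

def resize_nearest(vol, shape=(170, 170, 170)):
    """Nearest-neighbour resample of a 3-D volume onto a fixed grid."""
    idx = [np.round(np.linspace(0, s - 1, n)).astype(int)
           for s, n in zip(vol.shape, shape)]
    return vol[np.ix_(*idx)]
```

For example, a 4-frame acquisition of shape (4, 160, 192, 160) would be averaged to a single (160, 192, 160) volume and then resampled to 170x170x170.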

Model Training
Our study used a cycle-consistent generative adversarial network (CycleGAN) [6] framework, with ResNet generator networks comprising 10 residual blocks and PatchGAN discriminators, to perform unpaired translation of T1-weighted MRI into synthesised FDG-PET scans. The ResNet generator uses skip connections to combine multi-scale features from each residual block, which improves detail and image quality in the translation. Compared with previous approaches, this deeper ResNet architecture has greater representational capacity to capture the intricate mappings between the MRI and PET modalities. For the discriminator networks we used PatchGANs, which classify whether overlapping 128x128x32 subdivisions (patches) of the full image are real or fake. With this architecture, the model synthesises an FDG-PET from an input MRI and vice versa.
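The key objective enabling unpaired training is the cycle-consistency loss of Zhu et al. (2017): translating a volume to the other modality and back should reproduce the input. A rough sketch of this term is below; the generator arguments are hypothetical stand-ins for the trained ResNet networks, and the weight `lam` follows the common CycleGAN default rather than any value reported in the abstract:

```python
import numpy as np

def cycle_consistency_loss(x_mri, x_pet, g_mri2pet, g_pet2mri, lam=10.0):
    """L1 cycle loss: MRI -> PET -> MRI and PET -> MRI -> PET round trips
    should each recover the original volume."""
    fwd = np.abs(g_pet2mri(g_mri2pet(x_mri)) - x_mri).mean()  # MRI round trip
    bwd = np.abs(g_mri2pet(g_pet2mri(x_pet)) - x_pet).mean()  # PET round trip
    return lam * (fwd + bwd)
```

With perfectly inverse generators the loss is zero; during training this term is minimised jointly with the adversarial losses from the PatchGAN discriminators.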

Scoring System
A five-point visual grading scale was adapted from equivalent MRI quality rating scales [1,2,4] to measure FDG-PET image quality on four criteria: clarity, structural definition, contrast and brightness, and artefacts, each rated from 1 (lowest quality) to 5 (highest quality). An imaging scientist (LV) reviewed original and generated images from randomly selected subjects, blinded to the type of image (real or generated). A nonparametric Mann-Whitney U test was employed to discern significant differences between the generated and real image groups.
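The U statistic used here can be computed directly by counting pairwise wins between the two rating samples, with ties counted as half. This is a minimal sketch on hypothetical data (the actual ratings are those summarised in Table 1); in practice a library routine such as SciPy's would also supply the p-value:

```python
import numpy as np

def mann_whitney_u(a, b):
    """U statistic for sample `a` vs sample `b`: the number of (a_i, b_j)
    pairs with a_i > b_j, counting each tied pair as 0.5."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a[:, None] - b[None, :]
    return float((diff > 0).sum() + 0.5 * (diff == 0).sum())
```

With 20 ratings per group U ranges from 0 to 400, and U = 200 (its expected value under identical distributions) arises when the two groups' ratings are fully tied, consistent with the p = 1.0 results reported below.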

Results:

A visual comparison and evaluation using the five-point scoring system was performed on 40 images: 20 generated with the GAN and 20 real PET scans (Table 1). The results show statistically significant differences for two of the criteria, Contrast and Brightness (U=40.0, p<0.0001) and Artefacts (U=102.5, p=0.001), but not for the remainder: Image Clarity (U=200.0, p=1.0), Structural Definition (U=200.0, p=1.0), and Overall Diagnostic Usability (U=170.0, p=0.08). These differences indicate improved quality of the synthesised PET images relative to the original PET scans (Figure 1). The apparent improvement may reflect implicit denoising: the model cannot predict scanner noise or artefacts from the MRI, so it does not add them back in.

Conclusions:

The synthesised FDG-PET scans exhibit image quality metrics highly similar to those of the ground truth images. This is the first step in generating useful images. To determine whether they carry individual-level diagnostic information, future work will characterise disease-related FDG-PET abnormalities in the generated images, e.g. in AD and epilepsy groups, to see whether such abnormalities can be extracted from MRI input data alone. By generating surrogate FDG-PET imaging where PET scanners are unavailable, our model has the potential to expand access, reduce costs, and improve diagnoses.

Modeling and Analysis Methods:

Methods Development 1
PET Modeling and Analysis 2

Novel Imaging Acquisition Methods:

PET

Keywords:

Machine Learning
Modeling
MRI
Positron Emission Tomography (PET)
STRUCTURAL MRI
Other - Generative modeling

1|2Indicates the priority used for review
Supporting Image: MRI-PET-fig2.png
   ·Figure 1: (Top) real FDG-PET scan; (Bottom) generated FDG-PET scan
Supporting Image: MRI-PET-results.png
   ·Table 1: Summary statistics for each aspect of the qualitative analysis on the five-point visual grading scale, for (A) original PET scans and (B) generated PET scans
 

References:

1. Burmeister, H. P., Baltzer, P. A. T., Möslein, C., Bitter, T., Gudziol, H., Dietzel, M., ... & Kaiser, W. A. (2011), 'Visual grading characteristics (VGC) analysis of diagnostic image quality for high resolution 3 Tesla MRI volumetry of the olfactory bulb', Academic Radiology, 18(5), 634-639.
2. Ludewig, E., Richter, A., & Frame, M. (2010), 'Diagnostic imaging–evaluating image quality using visual grading characteristic (VGC) analysis', Veterinary Research Communications, 34, 473-479.
3. Mosconi, L., Mistur, R., Switalski, R., Tsui, W. H., Glodzik, L., Li, Y., ... & de Leon, M. J. (2009), 'FDG-PET changes in brain glucose metabolism from normal cognition to pathologically verified Alzheimer's disease', European Journal of Nuclear Medicine and Molecular Imaging, 36, 811-822.
4. Pawar, K., Chen, Z., Seah, J., Law, M., Close, T., & Egan, G. (2020), 'Clinical utility of deep learning motion correction for T1 weighted MPRAGE MR images', European Journal of Radiology, 133, 109384.
5. Silverman, D. H. (2004), 'Brain 18F-FDG PET in the diagnosis of neurodegenerative dementias: comparison with perfusion SPECT and with clinical evaluations lacking nuclear imaging', Journal of Nuclear Medicine, 45(4), 594-607.
6. Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017), 'Unpaired image-to-image translation using cycle-consistent adversarial networks', Proceedings of the IEEE International Conference on Computer Vision, pp. 2223-2232.