Quantifying model suitability in the context of ultra-low-field MRI super-resolution

Poster No:

2246 

Submission Type:

Abstract Submission 

Authors:

Levente Baljer1, Yiqi Zhang1, Niall Bourke1, Jessica Ringshaw2, Layla Bradford2, Simone Williams2, Kirsty Donald2, Steven Williams1, František Váša1, Rosalyn Moran1

Institutions:

1King's College London, London, United Kingdom, 2University of Cape Town, Cape Town, Western Cape, South Africa

First Author:

Levente Baljer  
King's College London
London, United Kingdom

Co-Author(s):

Yiqi Zhang  
King's College London
London, United Kingdom
Niall Bourke  
King's College London
London, United Kingdom
Jessica Ringshaw  
University of Cape Town
Cape Town, Western Cape, South Africa
Layla Bradford  
University of Cape Town
Cape Town, Western Cape, South Africa
Simone Williams  
University of Cape Town
Cape Town, Western Cape, South Africa
Kirsty Donald  
University of Cape Town
Cape Town, Western Cape, South Africa
Steven Williams  
King's College London
London, United Kingdom
František Váša  
King's College London
London, United Kingdom
Rosalyn Moran  
King's College London
London, United Kingdom

Introduction:

Magnetic resonance imaging (MRI) is integral to the assessment of paediatric neurodevelopment, but modern MRI systems are large and expensive. Recent ultra-low-field (ULF) MRI systems such as the 64 mT Hyperfine Swoop (Deoni et al., 2021) show great promise in widening MRI accessibility and reducing cost. However, imaging at low field strength comes at the cost of lower spatial resolution and signal-to-noise ratio, limitations that can be mitigated by deep-learning super-resolution (SR).

A difficulty with SR is that it is an ill-posed inverse problem (Delannoy et al., 2020): for each low-resolution input there exist multiple viable high-resolution outputs. Even if a model learns a transformation that reliably reproduces all target images in the training data, generation of artifacts is a consistent danger whenever it runs inference on unseen input (Johnson et al., 2016). This is a particular concern in medical applications, where outputs may inform diagnosis and treatment planning. We present a technique to quantify the suitability of new input to a pre-trained U-Net for SR of paediatric ULF MRI, enabling the a priori identification of inputs likely to yield 'unfit', hallucinated SR outputs.

Methods:

We trained a 3D U-Net using paired ULF (64 mT Hyperfine Swoop T2; 1.5×1.5×5 mm) and high-field (Siemens 3T T2; 1×1×1 mm) MRI scans from 40 subjects aged 3-6 months. Our test data comprised 10 scans each from cohorts aged 6 months, 12 months, and 4 years, all acquired using the same protocol. We first fed all training and test scans into the encoding layers of the model and extracted activations from the bottleneck layer to obtain a 'latent space representation' (see Figure 1A). We then quantified the dissimilarity between the latent features of each unseen test image and those of our training data using the Sinkhorn distance. To evaluate the utility of the Sinkhorn distance in predicting model suitability, we: 1) ran SR for each ULF test scan using our model; 2) segmented each output using SynthSeg+ (Billot et al., 2023) to obtain grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF) measures; 3) quantified within-subject overlap between segmentations of SR outputs and high-field scans with Dice coefficients; and 4) correlated the Sinkhorn distances and Dice scores across subjects.
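
To make the pipeline concrete, the sketch below shows one way the latent-feature extraction and Sinkhorn-distance computation could be implemented in Python (PyTorch plus the POT optimal-transport package). The model's `bottleneck` attribute and the treatment of each scan's bottleneck activations as a point cloud of channel vectors over spatial positions are our assumptions for illustration, not details specified above.

# Minimal sketch of the latent-deviation metric: extract bottleneck
# activations via a forward hook, then compare a test scan's latent
# features with the training features using an entropy-regularised
# optimal-transport (Sinkhorn) cost.
# ASSUMPTIONS (not given in the abstract): the trained 3D U-Net exposes
# its bottleneck as `model.bottleneck`, and each scan's activations are
# treated as a point cloud of channel vectors over spatial positions.
import numpy as np
import torch
import ot  # POT: Python Optimal Transport (pip install pot)

def bottleneck_cloud(model, volume):
    """Return bottleneck activations for one scan as an (n_voxels, n_channels) cloud."""
    store = {}
    handle = model.bottleneck.register_forward_hook(
        lambda mod, inp, out: store.update(z=out.detach()))
    model.eval()
    with torch.no_grad():
        model(volume)                      # volume: (1, 1, D, H, W) tensor
    handle.remove()
    z = store["z"].squeeze(0)              # (C, d, h, w) bottleneck activations
    return z.reshape(z.shape[0], -1).T.cpu().numpy()  # (d*h*w, C)

def sinkhorn_to_training(test_cloud, train_cloud, reg=0.05):
    """Entropy-regularised OT cost between one test scan's latent cloud and
    the pooled (optionally subsampled) training latent cloud."""
    M = ot.dist(test_cloud, train_cloud)   # pairwise squared Euclidean costs
    M /= M.max()                           # scale costs for numerical stability
    a = ot.unif(test_cloud.shape[0])       # uniform marginal over test voxels
    b = ot.unif(train_cloud.shape[0])      # uniform marginal over training voxels
    return float(ot.sinkhorn2(a, b, M, reg))

The cost-matrix normalisation and the regularisation strength `reg` are illustrative defaults here; both trade off fidelity of the transport cost against numerical stability of the Sinkhorn iterations.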

Results:

We found that the deviation of latent features from the training data, quantified using the Sinkhorn distance, was lower for unseen images of 6-month-old subjects than for 12-month-old subjects. The same trend was observed when comparing unseen scans from 6-month-olds to 4-year-olds; however, in both cases the difference was not significant (p=0.14 and p=0.10, respectively). The correlation between Dice scores of SR outputs and their Sinkhorn distances to the training data was not significant for 6-month-old or 12-month-old subjects; however, a significant negative correlation between these variables was observed for 4-year-old subjects across GM (r=-0.86, p=0.0014), WM (r=-0.91, p=0.00027), and CSF (r=-0.80, p=0.0058).
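
The Dice and correlation analyses (Methods steps 2-4) admit a straightforward implementation; the sketch below assumes the SynthSeg+ label maps for SR outputs and high-field scans are already loaded as integer numpy arrays, and uses a hypothetical tissue-label coding.

# Sketch of the evaluation: per-subject Dice overlap between SR and
# high-field segmentations, correlated with per-subject Sinkhorn distances.
# ASSUMPTION: the GM/WM/CSF label values below are hypothetical placeholders.
import numpy as np
from scipy.stats import pearsonr

LABELS = {"GM": 1, "WM": 2, "CSF": 3}  # hypothetical tissue-label coding

def dice(seg_a, seg_b, label):
    """Dice overlap for one tissue label between two integer label maps."""
    a, b = (seg_a == label), (seg_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

def correlate_suitability(sr_segs, hf_segs, sinkhorn_dists):
    """Pearson correlation between per-subject Sinkhorn distances and
    per-subject Dice scores, computed separately for each tissue type."""
    results = {}
    for tissue, lab in LABELS.items():
        dices = [dice(sr, hf, lab) for sr, hf in zip(sr_segs, hf_segs)]
        results[tissue] = pearsonr(sinkhorn_dists, dices)  # (r, p)
    return results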
Supporting Image: Figure1_final.png (Figure 1)
Supporting Image: Figure2_final.png (Figure 2)

Conclusions:

We aimed to quantify the suitability of new, unseen input to a pre-trained SR model. We explored the Sinkhorn distance between latent space representations of images as a candidate metric and provide proof-of-concept analyses with subjects whose age deviates from those in the training set. Although mean Sinkhorn distances did not differ significantly across cohorts, Sinkhorn distances obtained from 4-year-old scans were negatively correlated with the Dice scores indexing SR quality, across all three tissue types. We speculate that a minimum threshold of deviation from the training set must be reached before the Sinkhorn distance becomes informative about model suitability. Further analyses are required to validate this metric and assess its generalisability, including analyses of subjects across a wider range of ages and of patients with neurological pathologies.

Lifespan Development:

Early life, Adolescence, Aging

Neuroinformatics and Data Sharing:

Informatics Other 1

Novel Imaging Acquisition Methods:

Imaging Methods Other 2

Keywords:

Development
Machine Learning
MRI
STRUCTURAL MRI
Other - Low-field MRI

1|2 Indicates the priority used for review

References:

Billot, B., et al. (2023). SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining. Medical Image Analysis, 86, 102789.

Delannoy, Q., et al. (2020). SegSRGAN: Super-resolution and segmentation using generative adversarial networks - Application to neonatal brain MRI. Computers in Biology and Medicine, 120, 103755.

Deoni, S. C. L., et al. (2021). Accessible pediatric neuroimaging using a low field strength MRI scanner. NeuroImage, 238, 118273. https://doi.org/10.1016/j.neuroimage.2021.118273

Johnson, J., Alahi, A., & Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In Computer Vision - ECCV 2016. Springer.