BigBrain image blind restoration and alignment with generative priors, U-Net, and similarity

Poster No:

1930 

Submission Type:

Abstract Submission 

Authors:

Mingli Zhang1, Paule Toussaint2, Claude Y. Lepage3, Alan Evans4

Institutions:

1McGill University, Montreal, Quebec, 2McGill University, Montreal, Quebec, 3McGill Centre for Integrative Neuroscience (MCIN), Montreal, Quebec, 4McGill Centre for Integrative Neuroscience (MCIN), Montreal, Quebec

First Author:

Mingli Zhang  
McGill University
Montreal, Quebec

Co-Author(s):

Paule Toussaint  
McGill University
Montreal, Quebec
Claude Y. Lepage  
McGill Centre for Integrative Neuroscience (MCIN)
Montreal, Quebec
Alan Evans  
McGill Centre for Integrative Neuroscience (MCIN)
Montreal, Quebec

Introduction:

Human brain atlases play a crucial role in providing a spatial framework for organizing information derived from diverse brain research, integrating multimodal and multiresolution images. Recent progress in high-throughput scanning technology, coupled with powerful computing resources, has enabled a higher level of automation in digitizing and analyzing entire sections of the human brain at microscopic resolution.
The development of the high-resolution BigBrain model [1] has paved the way for creating comprehensive maps of cytoarchitectonic regions in full microscopic detail, covering extensive image stacks that span thousands of sections. However, many of the scanned BigBrain sections exhibit unknown degradation, low resolution, and noise, resulting in image blurring and misalignment.
We present a pipeline that combines two models to address image restoration and realignment when the relationship between high-resolution and low-resolution images is unknown.

Methods:

In the first part, we propose a blind super-resolution model to address the resolution-upscaling scenario in which the function mapping between high-resolution and low-resolution images is unknown. Our solution relies on three training modules with different learning objectives: (1) a degradation-aware network (U-Net) to synthesize the high-resolution image, given a low-resolution image and the corresponding blur kernel; (2) a pre-trained generative adversarial network (GAN) used as a prior, bridged to the U-Net by a latent-code mapping and several channel-split spatial feature transforms (CS-SFTs) [2, 3]; and (3) a rational polynomial image interpolation integrated into deep convolutional neural networks (CNNs) to retain fine details.
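
To make the prior-fusion step (2) concrete, below is a minimal PyTorch sketch of a channel-split spatial feature transform layer in the spirit of [2]. The module name CSSFT, the even channel split, and the two small convolutional heads are illustrative assumptions, not the exact implementation used in our pipeline.

    import torch
    import torch.nn as nn

    class CSSFT(nn.Module):
        # Channel-split spatial feature transform (sketch). Half of the U-Net
        # decoder channels pass through unchanged; the other half is modulated
        # by per-pixel scale/shift maps predicted from the GAN-prior features.
        # Assumes an even number of U-Net channels.
        def __init__(self, unet_channels, prior_channels):
            super().__init__()
            self.split = unet_channels // 2

            def head():
                return nn.Sequential(
                    nn.Conv2d(prior_channels, self.split, 3, padding=1),
                    nn.LeakyReLU(0.2, inplace=True),
                    nn.Conv2d(self.split, self.split, 3, padding=1),
                )

            self.to_scale = head()  # predicts per-pixel scale (gamma)
            self.to_shift = head()  # predicts per-pixel shift (beta)

        def forward(self, unet_feat, prior_feat):
            identity = unet_feat[:, :self.split]
            modulated = unet_feat[:, self.split:]
            scale = self.to_scale(prior_feat)
            shift = self.to_shift(prior_feat)
            modulated = scale * modulated + shift  # spatial feature transform
            return torch.cat([identity, modulated], dim=1)

In the full model, one such transform would sit at each decoder scale, letting the degradation-aware U-Net control, channel by channel, how strongly the generative prior is injected.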

The second part of the pipeline considers the generic problem of dense alignment between two images, whether they are two frames of a video, two widely different views of a scene, two paintings depicting similar content, or serial histological sections. While this task is typically addressed with domain-specific solutions for near-duplicate interpolation or alignment, severe blurring still challenges existing methods. To address this issue, we adopt a feature extractor that shares weights across scales and optimize our network with a Gram-matrix loss that measures the difference in feature correlations. The fine alignment is then learned in an unsupervised manner by a deep network that optimizes a standard structural similarity metric (SSIM) between the two images; minimal sketches of both losses are given below. We evaluated the performance of this method on degraded 2D sample patches from 10- and 1-micron sections of the BigBrain dataset, and on natural scene images.
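
The two training signals can be sketched as follows; this is a minimal PyTorch version assuming tensors of shape (batch, channels, height, width) with intensities in [0, 1]. The uniform averaging window stands in for the Gaussian window commonly used for SSIM, and the function names are illustrative.

    import torch
    import torch.nn.functional as F

    def gram_matrix(feat):
        # Channel-by-channel correlation (Gram) matrix of a feature map.
        b, c, h, w = feat.shape
        f = feat.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def gram_loss(feat_a, feat_b):
        # Correlation difference between features of the two images,
        # used to train the weight-shared multi-scale feature extractor.
        return F.mse_loss(gram_matrix(feat_a), gram_matrix(feat_b))

    def ssim_loss(x, y, win=11, c1=0.01 ** 2, c2=0.03 ** 2):
        # 1 - SSIM with a uniform window; minimizing it drives the
        # unsupervised fine-alignment network to warp x toward y.
        pad = win // 2
        mu_x = F.avg_pool2d(x, win, stride=1, padding=pad)
        mu_y = F.avg_pool2d(y, win, stride=1, padding=pad)
        var_x = F.avg_pool2d(x * x, win, stride=1, padding=pad) - mu_x ** 2
        var_y = F.avg_pool2d(y * y, win, stride=1, padding=pad) - mu_y ** 2
        cov_xy = F.avg_pool2d(x * y, win, stride=1, padding=pad) - mu_x * mu_y
        ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
            (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
        return 1.0 - ssim.mean()

In use, ssim_loss(warped, target) would be minimized by gradient descent over the parameters of the alignment network that produces the warped image.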

Results:

Fig 1. Flowchart of the proposed approach (a), applied to image alignment on natural scene images (b) and restoration of 1-micron sections of the BigBrain dataset (c). (d) Video generated from two input 10-micron BigBrain sections, with comparisons on PSNR and blind image quality assessment against SoftSplat [4] and FILM [5].

Fig 2. Image alignment results on 1-micron (a) and 10-micron sections (b) of the BigBrain dataset.

We observed improved structural detail in the final restored images at cellular resolution. Scores on spatial-quality, naturalness, and perception-based image quality metrics improved markedly overall for images restored with our approach compared to the original data. The alignment results for 1-micron BigBrain images show the superior performance of the proposed approach.
Supporting Image: fig1.png
   ·Figure 1
Supporting Image: fig2.png
   ·Figure 2
 

Conclusions:

Effective blind preprocessing of the degraded BigBrain images enhances their quality, addresses deterioration and misalignment, and ensures that subsequent analyses and interpretations are based on reliable and accurate data. The resulting improvement in image quality supports more robust findings in neuroscientific studies.

Modeling and Analysis Methods:

Methods Development 1
Motion Correction and Preprocessing 2

Neuroinformatics and Data Sharing:

Brain Atlases

Keywords:

Atlasing
Machine Learning
Modeling

1|2 Indicates the priority used for review

References:

1. Amunts, K., Lepage, C., Borgeat, L., Mohlberg, H., Dickscheid, T., Rousseau, M. É., Bludau, S., Bazin, P. L., Lewis, L. B., Oros-Peusquens, A. M., Shah, N. J., Lippert, T., Zilles, K., & Evans, A. C. (2013). BigBrain: An ultrahigh-resolution 3D human brain model. Science, 340(6139), 1472-1475. doi:10.1126/science.1235381
2. Wang, X., Li, Y., Zhang, H., & Shan, Y. (2021). Towards real-world blind face restoration with generative facial prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9168-9178).
3. Bian, S., Xu, X., Jiang, W., Shi, Y., & Sato, T. (2020). BUNET: Blind medical image segmentation based on secure UNET. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 612-622). Springer, Cham.
4. Niklaus, S., & Liu, F. (2020). Softmax splatting for video frame interpolation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5437-5446).
5. Reda, F., Kontkanen, J., Tabellion, E., Sun, D., Pantofaru, C., & Curless, B. (2022). FILM: Frame interpolation for large motion. In European Conference on Computer Vision (pp. 250-266). Springer Nature Switzerland.