Unveiling Dynamics in Standard MRI Sequences through Motion Magnification Techniques

Poster No:

1835 

Submission Type:

Abstract Submission 

Authors:

Zhaoying Pan1, Vidhya Vijayakrishnan Nair1, Qiuting Wen2, Yunjie Tong1, Xiaoqian Wang1

Institutions:

1Purdue University, West Lafayette, IN, 2Indiana University School of Medicine, Indianapolis, IN

First Author:

Zhaoying Pan  
Purdue University
West Lafayette, IN

Co-Author(s):

Vidhya Vijayakrishnan Nair  
Purdue University
West Lafayette, IN
Qiuting Wen  
Indiana University School of Medicine
Indianapolis, IN
Yunjie Tong  
Purdue University
West Lafayette, IN
Xiaoqian Wang  
Purdue University
West Lafayette, IN

Introduction:

Amplified Magnetic Resonance Imaging (aMRI) [1-3] applies motion magnification techniques [4-8], originally developed for natural videos, to MRI scans, enabling improved observation of subtle motion cycles within the brain. Previous research [1-3] has experimented with diverse motion magnification methods on cine MRI, a specialized MRI variant offering detailed high-resolution scans. However, challenges in data collection and the restriction to specific frequencies have limited the impact of aMRI. In this study, we developed a new post-processing technique to amplify motion cycles in EPI-based fMRI data. This approach allows for a better understanding of subtle brain and lateral ventricle movements during both resting-state and task-based fMRI scans. Because fMRI data are widely available, our method promises broader applicability in both clinical and research contexts.

Methods:

MRI data from participants were acquired using a 3T Siemens MRI scanner (Magnetom Prisma, Siemens Medical Solutions, Erlangen, Germany) equipped with a 64-channel head coil; participants also wore chest belts to record respiration signals. From each 3D brain volume, we extracted slices chosen to enhance observation of the lateral ventricles and rescaled them for higher contrast. These slices were then compiled along the temporal axis to construct a video for magnification. Rather than state-of-the-art learning-based methods [7-8], which may be impractical given the low resolution and limited dataset availability of general MRI, we opted for the phase-based magnification technique [6], whose performance stands out among non-learning-based methods [4-6]. Leveraging complex steerable pyramids, this technique decomposes the video to separate phase from amplitude, enabling amplification of the temporally bandpassed phases.
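The phase-separate-bandpass-amplify idea can be illustrated with a minimal, hypothetical 1D sketch. The actual method [6] uses a 2D complex steerable pyramid over video frames; here a Hilbert transform stands in for the spatial filter bank, and all parameters (frame rate, motion amplitude, passband, amplification factor) are illustrative, not those used in our pipeline.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

# A spatial sine pattern oscillates with a tiny sub-pixel (0.2 px) shift.
fps, n_frames, n_px = 30, 240, 256
t = np.arange(n_frames) / fps
x = np.arange(n_px)
shift = 0.2 * np.sin(2 * np.pi * 1.0 * t)            # 1 Hz motion
video = np.sin(2 * np.pi * 0.05 * (x[None, :] - shift[:, None]))

# 1) Separate local amplitude and phase via the analytic signal.
analytic = hilbert(video, axis=1)
amp = np.abs(analytic)
phase = np.unwrap(np.angle(analytic), axis=1)

# 2) Temporally bandpass the phase around the motion frequency.
sos = butter(2, [0.5, 2.0], btype="bandpass", fs=fps, output="sos")
band = sosfiltfilt(sos, phase, axis=0)

# 3) Amplify the bandpassed phase and reconstruct the frames.
alpha = 20.0
magnified = np.real(amp * np.exp(1j * (phase + alpha * band)))
```

After amplification, the 0.2-pixel oscillation becomes an approximately (1 + alpha) x 0.2 ≈ 4-pixel motion, while frequencies outside the passband are left untouched.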
Evaluating the results of our motion magnification technique is challenging because real-world videos lack a ground-truth standard for comparison. We therefore used two forms of assessment, digital simulations and data from a breath-holding experiment, together with the "y-t slices" method [4-8], which visualizes motion as a static image by extracting a one-pixel-width/height slice from a fixed position in every frame and concatenating these slices across the video. In our simulation, an oval shape represented the brain and an "X" shape the lateral ventricles; motion was simulated as the sum of sine functions at 0.3 Hz and 1 Hz (respiration and heartbeat), modulating the oval's size and the "X" shape's width. We quantified the simulation results with two metrics: the structural similarity index measure (SSIM) [9] and mean squared error (MSE). In the breath-holding experiment, we induced six cycles of brain dynamics using periodic hypercapnia.
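The simulation and y-t slice evaluation described above can be sketched as follows. This is a hypothetical, simplified version: only the pulsing oval is rendered (the "X" shape is analogous), the radii and amplitudes are illustrative, and SSIM (available as skimage.metrics.structural_similarity) is omitted for brevity.

```python
import numpy as np

fps, seconds, size = 20, 10, 64
t = np.arange(fps * seconds) / fps
# Sum of sines: 0.3 Hz (respiration) + 1 Hz (heartbeat).
motion = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.25 * np.sin(2 * np.pi * 1.0 * t)

yy, xx = np.mgrid[:size, :size]
cy = cx = (size - 1) / 2

def render(scale):
    """Render the pulsing-oval video; `scale` magnifies the motion."""
    frames = []
    for m in scale * motion:
        a, b = 20 + 3 * m, 14 + 2 * m        # oval radii modulated by motion
        oval = ((xx - cx) / a) ** 2 + ((yy - cy) / b) ** 2 <= 1.0
        frames.append(oval.astype(float))
    return np.stack(frames)                  # shape (T, H, W)

truth = render(3.0)                          # ground-truth "magnified" video
base = render(1.0)                           # unamplified input

# y-t slice: a one-pixel-high row at a fixed y, stacked across frames,
# renders the temporal motion as a single static image.
yt_truth = truth[:, size // 2, :]            # shape (T, W)

# MSE between the ground truth and a candidate amplified result:
mse = np.mean((truth - base) ** 2)
```

In practice the candidate compared against the ground truth would be the output of the magnification pipeline applied to the unamplified rendering, rather than the unamplified rendering itself.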
Supporting Image: figure1.png
 

Results:

Our simulation results, illustrated in Figure 2, closely matched the simulated standard, achieving an SSIM of 0.954 and an MSE of 3.186 between the ground-truth simulation and the amplified result. These results confirm our method's ability to effectively magnify motion patterns at different frequency bands. Additionally, in the experiments where subjects held their breath, our method identified motion signals corresponding to exactly six breath-holding cycles, with precise timing of when the breath-holding started and stopped. Magnifying the data collected from these experiments made the breathing patterns clearly observable, demonstrating the effectiveness of our method on real-world fMRI data.
Supporting Image: figure2.png
 

Conclusions:

Our results demonstrate that the magnification approach successfully emphasizes minor movements of the brain and lateral ventricles in standard fMRI data, observable across various frequency bands. Because the approach works on standard fMRI, it is applicable to a wide range of labs and clinics, offering practical benefits for diagnosis and analysis.

Modeling and Analysis Methods:

Image Registration and Computational Anatomy 1
Methods Development 2

Keywords:

Cerebro Spinal Fluid (CSF)
Computing
Data analysis
fMRI CONTRAST MECHANISMS
FUNCTIONAL MRI
MRI
Open-Source Code
Vision
Workflows

1|2 Indicates the priority used for review

References (author-date format):

[1] Holdsworth, S. J. (2016). Amplified magnetic resonance imaging (aMRI). Magnetic Resonance in Medicine, 75(6), 2245-2254.
[2] Terem, I. (2018). Revealing sub-voxel motions of brain tissue using phase-based amplified MRI (aMRI). Magnetic Resonance in Medicine, 80(6), 2549-2559.
[3] Terem, I. (2021). 3D amplified MRI (aMRI). Magnetic Resonance in Medicine, 86(3), 1674-1686.
[4] Liu, C. (2005). Motion magnification. ACM Transactions on Graphics (TOG), 24(3), 519-526.
[5] Wu, H. (2012). Eulerian video magnification for revealing subtle changes in the world. ACM Transactions on Graphics (TOG), 31(4), 1-8.
[6] Wadhwa, N. (2013). Phase-based video motion processing. ACM Transactions on Graphics (TOG), 32(4), 1-10.
[7] Oh, T. H. (2018). Learning-based video motion magnification. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 633-648).
[8] Pan, Z. (2023). Self-supervised motion magnification by backpropagating through optical flow. In Thirty-seventh Conference on Neural Information Processing Systems.
[9] Wang, Z. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600-612.