Poster No:
1952
Submission Type:
Abstract Submission
Authors:
Amanda Mejia1, Damon Pham2, Thomas Nichols3, B. T. Thomas Yeo4
Institutions:
1Indiana University, Bloomington, IN, 2Indiana University Bloomington, Bloomington, IN, 3University of Oxford, Oxford, United Kingdom, 4National University of Singapore, Singapore, Singapore
First Author:
Amanda Mejia
Indiana University
Bloomington, IN
Co-Author(s):
Damon Pham
Indiana University Bloomington
Bloomington, IN
Thomas Nichols
University of Oxford
Oxford, United Kingdom
B. T. Thomas Yeo
National University of Singapore
Singapore, Singapore
Introduction:
To mitigate the influence of motion on fMRI analysis, scrubbing is commonly used to exclude volumes with excessive head motion [5,7]. Stringent motion scrubbing (e.g. censoring volumes with FD > 0.2mm) is often endorsed, but at what cost? As increasingly subtle noise is removed, the risk of signal loss grows. Introductory statistics teaches us that the error of an estimate is determined by two factors: population variance and sample size. Viewing volumes as samples from a population, scrubbing reduces variance while also decreasing sample size. These two competing forces may ultimately improve or worsen estimation error. Here, we examine the effect of motion scrubbing on the estimation error of functional connectivity (FC). We quantify the additional scan time required to maintain accuracy when over-scrubbing, with implications for data collection budgets and sample sizes.
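A toy numerical sketch of this trade-off is below; the numbers are purely hypothetical and only illustrate that, if error variance scales as baseline variance divided by retained volumes, scrubbing helps only when the variance reduction outweighs the volumes lost.

```python
# Hypothetical numbers for illustration only; error variance is modeled as v / T.
v_raw, T_raw = 1.00, 1000         # no scrubbing: baseline variance, retained volumes
v_strict, T_strict = 0.90, 800    # stringent scrubbing: lower variance, fewer volumes

err_raw = v_raw / T_raw           # 0.001000
err_strict = v_strict / T_strict  # 0.001125 -> worse, despite the lower baseline variance
print(err_raw, err_strict)
```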
Methods:
For subject i and edge j, let w(T,i,j) be FC based on T volumes, which is observed with noise:
w(T,i,j) = x(i,j) + e(T,i,j)
where x(i,j) is the truth. The error variance is affected by autocorrelation and other factors [2], but we assume it is inversely proportional to T:
Var{e(T,i,j)} = v(j)/T
The "baseline variance" v(j) reflects noise levels in the underlying data and is reduced by scrubbing. We use the HCP retest set (n=42) to determine this baseline variance for different scrubbing methods. Following the procedure in [4], we establish "ground truth" FC x(i,j) using nearly 2h of rest fMRI (minus a 10-min holdout set), which we stringently scrub using a lagged, filtered version of FD for multi-band data [3,4,6]. We apply different scrubbing methods m to the held-out data to produce FC estimates w(Tmi,i,j). Var{e(T,i,j)} can be estimated based on the squared errors w(Tmi,i,j)- x(i,j), then multiplied by Tmi and averaged over i to estimate the baseline variance v(j,m). We consider FD thresholds of 0.2-0.5mm (FD2-FD5) and the DVARS dual threshold [1].
We assess the impact of scrubbing on accuracy and on the scan time required. For accuracy, we use mean squared error (MSE) relative to the ground truth. For scan time, we define the budget inflation factor (BIF) of method m versus m' as the factor by which total scan time must increase to achieve the same accuracy. The BIF reduces to a simple form:
v(j,m)/v(j,m') × d(m)/d(m')
where d(m) is the average factor by which acquired scan time must increase to retain T volumes post-scrubbing. Thus, the BIF depends on the ratio of the two scrubbing methods' baseline variances and on their censoring rates.
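A sketch of the BIF computation implied by this formula; treating d(m) as 1 / (1 - censoring rate) is our reading of the definition above, not an exact specification, and the example values in the comment are hypothetical.

```python
import numpy as np

def budget_inflation_factor(v_m, v_mp, censor_rate_m, censor_rate_mp):
    """Per-edge BIF of method m versus m': v(j,m)/v(j,m') * d(m)/d(m')."""
    d_m = 1.0 / (1.0 - censor_rate_m)     # assumed: scan-time inflation from censoring rate
    d_mp = 1.0 / (1.0 - censor_rate_mp)
    return np.asarray(v_m) / np.asarray(v_mp) * (d_m / d_mp)

# Illustrative call with hypothetical censoring rates (e.g. FD2 vs FD4):
# bif_fd2_vs_fd4 = budget_inflation_factor(v_fd2, v_fd4, 0.17, 0.08)
```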
Results:
Fig. 1 shows that motion scrubbing decreases baseline variance, with stringent motion scrubbing reducing it the most (A). However, stringent motion scrubbing results in substantially greater data loss than other methods (B). As a result, stringent motion scrubbing (FD2) actually worsens FC accuracy, while more lenient motion scrubbing improves it (C, D). There is a U-shaped relationship between data removal and error levels, illustrating the risk of over-scrubbing (E). Fig. 2 shows that over-scrubbing results in the need to collect more data, while more lenient scrubbing can facilitate shorter scans (A-B). The BIF of stringent (FD2) versus more lenient (FD4) motion scrubbing implies over 8% additional scan time.

Figure 1. Stringent motion scrubbing reduces baseline variance but also substantially reduces sample size, ultimately leading to worse accuracy.

Figure 2. Stringent motion scrubbing requires increased scan time to make up for signal loss and maintain accuracy, with an estimated 8% more scan time than more lenient motion scrubbing.
Conclusions:
A basic statistical tenet is that the error of estimates based on a sample from a population is determined by population variance and sample size. Motion scrubbing affects both factors, and may therefore help or hinder FC estimation, brain-wide association studies (BWAS), and other fMRI analyses. We find that while stringent motion scrubbing reduces baseline variance, it ultimately worsens accuracy. This is driven by its high censoring rates: over 17% even in the low-motion HCP population. The ultimate consequence is the need for longer scans. This reveals that stringent motion scrubbing comes at a real cost, e.g. larger budgets and/or smaller samples. By contrast, more lenient scrubbing still reduces noise effectively while having the opposite effect, ultimately facilitating shorter scans, smaller budgets, and more participants.
Modeling and Analysis Methods:
Exploratory Modeling and Artifact Removal
Methods Development
Motion Correction and Preprocessing 1
Task-Independent and Resting-State Analysis 2
Keywords:
Data analysis
FUNCTIONAL MRI
Statistical Methods
Other - denoising; motion; budget; scrubbing; censoring
1|2 indicates the priority used for review
References:
1. Afyouni, S. (2018), 'Insight and inference for DVARS', NeuroImage, 172, 291-312.
2. Afyouni, S. (2019), 'Effective degrees of freedom of the Pearson's correlation coefficient under autocorrelation', NeuroImage, 199, 609-625.
3. Fair, D. (2020), 'Correction of respiratory artifacts in MRI head motion estimates', NeuroImage, 208, 116400.
4. Phạm, D. Đ. (2023), 'Less is more: balancing noise reduction and data retention in fMRI with data-driven scrubbing', NeuroImage, 270, 119972.
5. Power, J. D. (2012), 'Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion', NeuroImage, 59(3), 2142-2154.
6. Power, J. D. (2019), 'Distinctions among real and apparent respiratory motions in human fMRI data', NeuroImage, 201, 116041.
7. Satterthwaite, T. D. (2019), 'Motion artifact in studies of functional connectivity: Characteristics and mitigation strategies', Human brain mapping, 40(7), 2033-2051.