Poster No:
1687
Submission Type:
Abstract Submission
Authors:
Joshua Dean1, Daniel Handwerker1, Paul Taylor1, Peter Lauren1, Daniel Glen1, Peter Bandettini1
Institutions:
1National Institute of Mental Health, Bethesda, MD
First Author:
Joshua Dean
National Institute of Mental Health
Bethesda, MD
Co-Author(s):
Paul Taylor
National Institute of Mental Health
Bethesda, MD
Daniel Glen
National Institute of Mental Health
Bethesda, MD
Introduction:
Physiological regressors are useful for denoising fMRI data because the fMRI signal contains physiological (non-neuronal) sources, such as respiration and cardiac pulsations [1,2]. The efficacy of these regressors is directly related to how well such fluctuations are characterized. Multiple public and in-house programs generate physiological regressors from the data, but identifying and fixing errors in those regressors is tedious, rarely done, and essentially never reported. AFNI programs have been developed in step with the growth of quality control (QC) of fMRI data [3]. AFNI's [4] new physio_calc.py estimates respiration volume per time (RVT) [2], respiratory, and cardiac regressors; it includes a manual correction option and generates QC reports automatically. We evaluated the quality of physio_calc.py's results on real respiratory and cardiac traces in order to improve physio_calc.py's algorithms and interactive QC tools. We also highlight common corrections to benefit other researchers and to provide feedback to tool developers.
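As a concrete illustration of the RVT measure introduced by Birn et al. [2], the sketch below computes breath amplitude (peak envelope minus trough envelope) divided by local breath period from detected extrema. This is a simplified, hypothetical reimplementation for illustration only; the function name and parameter choices are assumptions, and it is not physio_calc.py's actual algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

def rvt_sketch(resp, fs):
    """Illustrative RVT per Birn et al. (2006): breath amplitude
    divided by local breath period, interpolated to every sample.
    NOTE: a simplified sketch, not physio_calc.py's actual code."""
    # Require ~2 s between extrema to guard against spurious
    # detections from sensor noise (an assumed, illustrative value).
    min_dist = int(2.0 * fs)
    peaks, _ = find_peaks(resp, distance=min_dist)
    troughs, _ = find_peaks(-resp, distance=min_dist)
    t = np.arange(len(resp)) / fs
    # Interpolate peak and trough amplitudes to every time point.
    upper = np.interp(t, t[peaks], resp[peaks])
    lower = np.interp(t, t[troughs], resp[troughs])
    # Local breath period from successive peak times.
    period = np.interp(t, t[peaks][1:], np.diff(t[peaks]))
    return (upper - lower) / period

# Synthetic respiratory trace: one breath every 4 s for 60 s.
fs = 50.0
t = np.arange(0, 60, 1 / fs)
resp = np.sin(2 * np.pi * 0.25 * t)
rvt = rvt_sketch(resp, fs)  # ~0.5 away from the edges: 2.0 / 4 s
```

Real belt traces have variable amplitude and period, so the RVT time series would fluctuate rather than stay flat as in this synthetic case; the value of careful extrema QC is that each misplaced peak or trough directly distorts this ratio.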
Methods:
Twenty-four healthy participants performed two cued-breathing tasks and one event-related audiovisual task during a multi-echo fMRI acquisition, typically completing 2-3 runs per task. The tasks were conducted as part of a larger multi-echo fMRI study; complete task descriptions are available at https://github.com/nimh-sfim/ComplexMultiEcho1. Respiratory and cardiac traces were acquired using a respiratory belt and pulse oximeter, respectively. In total, 147 respiratory and 160 cardiac traces were analyzed. RVT and cardiac regressors were generated per run using physio_calc.py, which was also used for manual inspection and correction of each run. We report on our manual review of the traces, identifying and removing artifactual peaks and troughs (collectively referred to as extrema).
Results:
When reviewing the estimated respiratory traces' extrema, we removed an average of 7.56 extrema, added 5.13 extrema, and adjusted the temporal position of 0.38 extrema per run (Fig. 1A). Overall, 5.95% of the total extrema required editing, and 50% of all changes were concentrated in just 10 of the 147 respiratory traces. Most respiratory datasets required minimal editing, and 39 required no editing at all (Fig. 1B).
Fig. 2A shows a portion of a representative good-quality respiratory trace without artifactual extrema. Fig. 2B depicts examples of artifactual extrema encountered in respiratory traces, which required manual correction. The respiratory traces required more manual changes than the cardiac traces. For the cardiac traces, the most prevalent issue encountered by physio_calc.py was noisy extrema, likely due to finger motion lasting a few seconds (Fig. 2C). Across all cardiac datasets, 0.60 ± 1.14 occurrences of disruptive extrema were observed per run, with 13 of 160 runs contributing 50% of the changes and 108 runs requiring no changes. Because these noisy extrema prevent true peak detection, functionality is being added to physio_calc.py to interpolate peak times or censor noisy portions of oximetry traces. Including the time taken to manually edit extrema, generation of RVT and cardiac regressors for all 307 traces was completed in 6.08 hours (1.2 min/trace), with editing accounting for less than half of that time.
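To illustrate the kind of peak-time interpolation described for noisy oximetry segments, the hypothetical sketch below fills a gap in a detected cardiac peak series by inserting evenly spaced surrogate peak times wherever an inter-beat interval far exceeds the median. The function name, the 1.5x-median threshold, and the even-spacing strategy are all assumptions for illustration, not physio_calc.py's actual implementation.

```python
import numpy as np

def fill_missed_beats(peak_times, gap_factor=1.5):
    """Illustrative repair of a pulse-oximetry peak series: where an
    inter-beat interval far exceeds the median (e.g., beats lost to
    finger motion), insert evenly spaced surrogate peak times.
    NOTE: a hypothetical sketch, not physio_calc.py's implementation."""
    peak_times = np.asarray(peak_times, dtype=float)
    ibi = np.diff(peak_times)               # inter-beat intervals
    med = np.median(ibi)
    filled = [peak_times[0]]
    for t0, gap in zip(peak_times[:-1], ibi):
        n_missed = int(round(gap / med)) - 1
        if gap > gap_factor * med and n_missed > 0:
            # Distribute surrogate peaks evenly across the gap.
            step = gap / (n_missed + 1)
            filled.extend(t0 + step * np.arange(1, n_missed + 1))
        filled.append(t0 + gap)
    return np.array(filled)

# Beats at ~1 s intervals; two beats lost to motion between t=4 and t=7.
peaks = [0, 1, 2, 3, 4, 7, 8, 9, 10]
repaired = fill_missed_beats(peaks)  # surrogate peaks at t=5 and t=6
```

Interpolation of this kind is appropriate only for brief dropouts; for longer stretches of unusable signal, censoring the affected portion of the trace (as also noted above) is the safer choice.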


Conclusions:
Checking the quality of all physiological traces is necessary to appropriately model physiological fluctuations. Bad data, such as extrema caused by finger motion during pulse oximetry, cause problems for many algorithms and benefit from human inspection and correction, so efficient methods for interactively fixing such issues are necessary. This QC evaluation has already improved physio_calc.py's accuracy by identifying the types and frequency of errors encountered and by refining the program's underlying assumptions. This work is an example of how QC tools (particularly visual ones) provide useful feedback to improve algorithms.
Modeling and Analysis Methods:
Classification and Predictive Modeling
Exploratory Modeling and Artifact Removal 1
Methods Development 2
Keywords:
Computing
Data analysis
Design and Analysis
FUNCTIONAL MRI
Modeling
NORMAL HUMAN
Open Data
Open-Source Software
Workflows
1|2 indicates the priority used for review
References:
[1] G. H. Glover, T.-Q. Li, and D. Ress (2000). “Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR,” Magn. Reson. Med., vol. 44, no. 1, pp. 162–167.
[2] R. M. Birn, J. B. Diamond, M. A. Smith, and P. A. Bandettini (2006). “Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI,” NeuroImage, vol. 31, no. 4, pp. 1536–1548.
[3] P. A. Taylor, D. R. Glen, R. C. Reynolds, A. Basavaraj, D. Moraczewski, and J. A. Etzel (2023). “Editorial: Demonstrating quality control (QC) procedures in fMRI,” Front. Neurosci., vol. 17.
[4] R. W. Cox (1996). “AFNI: software for analysis and visualization of functional magnetic resonance neuroimages,” Comput. Biomed. Res. Int. J., vol. 29, no. 3, pp. 162–173.