Thumbs up or down: Simple quality assessment tool for physiological signals

Poster No:

1485 

Submission Type:

Abstract Submission 

Authors:

Roza Bayrak1, Richard Song1, Ruogi Yang1, Catie Chang1

Institutions:

1Vanderbilt University, Nashville, TN

First Author:

Roza Bayrak  
Vanderbilt University
Nashville, TN

Co-Author(s):

Richard Song  
Vanderbilt University
Nashville, TN
Ruogi Yang  
Vanderbilt University
Nashville, TN
Catie Chang  
Vanderbilt University
Nashville, TN

Introduction:

Functional MRI measures the BOLD signal as a proxy for neural activity. Because the BOLD signal reflects changes in blood oxygenation, it is modulated not only by neural activity but also by peripheral physiological factors such as breathing and heart rate. While traditionally regarded as noise, these systemic physiological processes have frequently been shown to be linked with cognitive processes and may contribute valuable information to fMRI studies. Recognizing this, neuroimaging research increasingly draws upon concurrent recordings of peripheral physiology to enhance fMRI analysis. However, the usefulness of physiological data is contingent on the quality of the recordings as well as on expertise in data handling, both of which can vary significantly.

To address this critical gap, we devised a simple tool for assessing the quality of peripheral physiological recordings. This deep-learning-based tool not only ensures data integrity but could also describe data-quality issues and suggest steps for fixing the data, thereby promising to improve the accuracy and reliability of downstream research. The code and data used in this abstract are publicly available.

Methods:

Classification Tool:
We developed a simple classification pipeline to assess the quality of raw physiological measures (Fig. 1, left), focusing on respiration and cardiac waveforms. Here, we use data from the HCP Young Adult cohort, in which physiological signals were collected using a pulse oximeter and a respiration belt, sampled at 400 Hz. The raw waveforms were downsampled by a factor of 4, temporally normalized (zero mean, unit variance), and provided to the neural networks. Each network is composed of stacked 1D convolutional neural network (CNN) layers with decreasing feature-map sizes at each layer. The models were trained with a learning rate of 0.0001 and a batch size of 2, using the Adam optimizer and a binary cross-entropy loss. The dataset was divided into training and test sets using rotating partitions in a 5-fold cross-validation framework. To prevent overfitting, an early stopping criterion was applied based on performance on the validation set. Link to repo: github.com/neurdylab/physio_qa_dl
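The preprocessing described above (downsampling the 400 Hz waveforms by a factor of 4, then temporal normalization to zero mean and unit variance) can be sketched as follows; the function name and decimation-by-striding are our illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def preprocess(signal, factor=4):
    """Downsample a raw physiological waveform and z-normalize it.

    Mirrors the preprocessing described in Methods: HCP waveforms
    sampled at 400 Hz are downsampled by a factor of 4 and
    normalized to zero mean, unit variance before being passed
    to the CNN. Simple decimation is used here for illustration.
    """
    x = np.asarray(signal, dtype=float)[::factor]  # keep every 4th sample
    return (x - x.mean()) / x.std()                # zero mean, unit variance
```

For example, a 400-sample input yields a 100-sample normalized output ready for the network.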

Annotation Tool:
To train our supervised neural networks, we labeled the physiological signals using an in-house annotation tool (github.com/neurdylab/physio_qa_manual): a MATLAB-based GUI that enables fast annotation of physiological signals. The tool (1) takes in raw recordings, (2) passes them through initial quality checkpoints (e.g., detecting empty files and clipping), (3) plots the full-length raw time series, and (4) provides the rater (annotator) with visual information for quality inspection and annotation (Fig. 1, right). Since manual annotation is cumbersome, we aim to build a fully automated quality assurance method; however, manually assessing the quality of labels is vital at this initial stage to ensure the accuracy of our models.
Supporting Image: Figure-11.png
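The initial quality checkpoints (step 2 above) might look like the following sketch. Note the actual tool is MATLAB-based; this Python version, and its clipping criterion (the fraction of samples pinned at the signal's extremes), are illustrative assumptions rather than the tool's actual logic:

```python
import numpy as np

def initial_quality_checks(signal, clip_frac=0.02):
    """Flag obvious problems in a raw recording before manual annotation.

    Checks mirror the annotation tool's initial checkpoints:
    empty files and clipping. The clip_frac threshold (fraction of
    samples stuck at the recording's min/max) is an assumed value.
    """
    x = np.asarray(signal, dtype=float)
    issues = []
    if x.size == 0 or np.all(x == 0):
        issues.append("empty")
        return issues
    # Clipping: an unusually large share of samples sits exactly at
    # the recording's extremes, suggesting a saturated sensor.
    at_rails = np.mean((x == x.max()) | (x == x.min()))
    if at_rails > clip_frac:
        issues.append("clipping")
    return issues
```

Recordings that pass these checks would then be plotted in full for visual inspection by the rater.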
 

Results:

The models classified the quality of respiration data with 83.35 ± 1.01% accuracy and cardiac data with 88.49 ± 1.42% accuracy.
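The reported values are presumably mean ± standard deviation of accuracy across the five cross-validation folds; a minimal sketch of that aggregation (the helper name and use of the sample standard deviation are our assumptions):

```python
import numpy as np

def summarize_folds(accuracies):
    """Summarize per-fold accuracies as (mean, sample standard deviation).

    Illustrative helper for reporting k-fold results in the
    mean +/- std format used in the Results section.
    """
    a = np.asarray(accuracies, dtype=float)
    return a.mean(), a.std(ddof=1)  # ddof=1: sample std across folds
```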

Conclusions:

Here, we provide a simple thumbs-up / thumbs-down tool that can save several hours of manual vetting of physiological recordings. While the current study focuses on respiration and cardiac data, the CNN models could in the future be readily retrained to handle other physiological measures, such as eye tracking and end-tidal CO2. We envision that future iterations of the tool can be extended with new modules to (1) generate text-based reports detailing why the quality check for a given recording failed and whether it is fixable (e.g., if a recording is partly usable, or if a simple interpolation algorithm could fix the problem), (2) provide suggestions for fixing the data, and (3) apply the suggested fix and return the corrected data.

Modeling and Analysis Methods:

Classification and Predictive Modeling 1

Novel Imaging Acquisition Methods:

BOLD fMRI

Physiology, Metabolism and Neurotransmission:

Physiology, Metabolism and Neurotransmission Other 2

Keywords:

Data analysis
Machine Learning
Open-Source Code
Other - Peripheral Physiology

1|2 Indicates the priority used for review

References (author, date format):

TBD