The brainlife.io cloud services for human visual-field mapping and population receptive field estimation
Poster No:
2309
Submission Type:
Abstract Submission
Authors:
David Hunt1, Bradley Caron2, Steven O'Riley1, Soichi Hayashi3, Franco Pestilli1
Institutions:
1Indiana University, Bloomington, IN, 2Indiana University Bloomington, Bloomington, IN, 3Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN
Introduction:
Using functional magnetic resonance imaging (fMRI) collected during a fixation task, we can measure the human visual cortex and the multitude of retinotopic (visual-field) maps within it [7]. The measured fMRI signal can also be used to estimate the properties of the population receptive fields (PRFs) of individual cortical locations within each map [8], providing a quantitative measure of their expected response to visual stimuli. These in-vivo measurements are critical for understanding the functional architecture of the human visual system and how it develops, ages, and responds to disease. Making them, however, requires both advanced software skills and knowledge of a complex stack of software libraries. Indeed, multiple libraries exist for estimating retinotopic maps and PRF parameters in living human brains, e.g., github.com/kendrickkay/analyzePRF, gru.stanford.edu/doku.php/mrTools/tutorialsprf, github.com/vistalab/vistasoft, or github.com/noahbenson/neuropythy. Yet the skills necessary to use these libraries are highly specialized; this, in turn, can limit the application of the methods to studies led by investigators committed to learning advanced coding methods.
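To illustrate the modeling approach these libraries share [8], the sketch below implements a minimal linear pRF forward model: each cortical location's pRF is approximated as a 2D isotropic Gaussian in visual-field coordinates, and its predicted response at each time point is the overlap between the pRF and the stimulus aperture. This is a simplified didactic sketch, not the implementation used by any of the libraries above; all function names are hypothetical, and hemodynamic (HRF) convolution is omitted.

```python
import numpy as np

def gaussian_prf(x, y, x0, y0, sigma):
    """2D isotropic Gaussian pRF over visual-field coordinates (degrees)."""
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))

def predicted_response(stimulus, x, y, x0, y0, sigma):
    """Predicted response per time point: overlap of the pRF with the
    binary stimulus aperture (frames x height x width)."""
    prf = gaussian_prf(x, y, x0, y0, sigma)
    return stimulus.reshape(stimulus.shape[0], -1) @ prf.ravel()

# Toy example: a vertical bar sweeping left to right across a 21x21
# field of view spanning -10 to +10 degrees of visual angle.
grid = np.linspace(-10, 10, 21)
x, y = np.meshgrid(grid, grid)
stimulus = np.zeros((21, 21, 21))
for t in range(21):
    stimulus[t, :, t] = 1.0  # bar occupies column t at frame t

resp = predicted_response(stimulus, x, y, x0=2.0, y0=0.0, sigma=1.5)
# The response peaks when the bar crosses the pRF center (x0 = 2 deg).
print(grid[np.argmax(resp)])
```

The peak of the predicted time course recovers the pRF's horizontal position, which is the intuition behind mapping visual-field coordinates from sweeping-bar stimuli.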
Our work promotes FAIR principles [9] by developing a series of cloud services that make visual-field and PRF mapping automated, accessible, and visualizable on the open-science platform brainlife.io. The services comprise containerized "Apps" that process MRI data from raw NIFTI files (both fMRI and T1-weighted anatomy). Users can upload their own retinotopic data to be processed automatically by these Apps, or they can process the various datasets available on OpenNeuro.org, DataLad.org, or brainlife.io itself. The easy-to-use interface allows users to take advantage of the free cloud-computing infrastructure available at brainlife.io. Finally, brainlife.io generates a full provenance record for the derived data by tracking the types and versions of the Apps used to estimate visual-field maps and PRF parameters.
Methods:
Visual-field maps can be estimated automatically on brainlife.io through several Apps: the fMRI-based method developed by Benson, Kay, et al. [2] (https://doi.org/10.25663/brainlife.app.203); the method developed by Benson et al. [3], which predicts maps from cortical folding in T1-weighted anatomical data (https://doi.org/10.25663/brainlife.app.187); and the Bayesian method developed by Benson and Winawer [4], which combines the fMRI data with an anatomical prior (https://doi.org/10.25663/brainlife.app.245). fMRI data preprocessing can be performed using the fMRIPrep App (https://doi.org/10.25663/brainlife.app.160) [5] or the Human Connectome Project pipeline App (https://doi.org/10.25663/bl.app.82) [6].
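The estimation step that fMRI-based pRF Apps automate can be sketched, in simplified form, as a search over candidate pRF parameters (x0, y0, sigma) for the model whose predicted time course best matches the measured BOLD signal. The sketch below uses an exhaustive grid search and correlation as the goodness-of-fit metric on synthetic data; it is an illustrative toy, not the actual algorithm used by the Apps cited above (which include HRF modeling, nonlinearities, and more efficient optimization), and all names are hypothetical.

```python
import numpy as np
from itertools import product

def gaussian_prf(x, y, x0, y0, sigma):
    """2D isotropic Gaussian pRF over visual-field coordinates (degrees)."""
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))

def predicted_response(stimulus, x, y, x0, y0, sigma):
    """Overlap of the pRF with each binary stimulus frame."""
    prf = gaussian_prf(x, y, x0, y0, sigma)
    return stimulus.reshape(stimulus.shape[0], -1) @ prf.ravel()

def fit_prf(stimulus, bold, x, y, centers, sigmas):
    """Grid search: return the (x0, y0, sigma) whose predicted time
    course best correlates with the measured BOLD signal."""
    best, best_r = None, -np.inf
    for x0, y0, s in product(centers, centers, sigmas):
        pred = predicted_response(stimulus, x, y, x0, y0, s)
        if pred.std() == 0:
            continue  # flat prediction: correlation undefined
        r = np.corrcoef(pred, bold)[0, 1]
        if r > best_r:
            best, best_r = (x0, y0, s), r
    return best, best_r

# Synthetic experiment: vertical then horizontal bar sweeps over a
# 21x21 aperture spanning -10 to +10 degrees.
grid = np.linspace(-10, 10, 21)
x, y = np.meshgrid(grid, grid)
frames = []
for t in range(21):
    f = np.zeros((21, 21)); f[:, t] = 1.0; frames.append(f)  # vertical bar
for t in range(21):
    f = np.zeros((21, 21)); f[t, :] = 1.0; frames.append(f)  # horizontal bar
stimulus = np.array(frames)

# Simulate a voxel with a known pRF, plus measurement noise.
true_params = (3.0, -2.0, 2.0)
rng = np.random.default_rng(0)
bold = predicted_response(stimulus, x, y, *true_params)
bold += rng.normal(0.0, 0.1, len(stimulus))

params, r = fit_prf(stimulus, bold, x, y, centers=grid, sigmas=[1.0, 2.0, 4.0])
print(params, round(r, 3))
```

Combining sweeps along both axes makes x0 and y0 separately identifiable; with low noise the grid search recovers parameters close to the simulated ground truth.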
Results:
The series of Apps described above is publicly available and integrated within brainlife.io. Figure 1 shows the user interface that allows accessing the Apps and visualizing the results, available at https://brainlife.io/ui/prfview.
Conclusions:
We promote FAIR principles by providing a new series of web services that allow automated data preprocessing and estimation of human visual-field maps and population receptive fields. This work is meant to lower the barrier to entry to computational neuroimaging methods for vision science. Our future work will focus on adding Apps that further analyze the visual-field maps and PRFs generated by the currently available Apps.
Higher Cognitive Functions:
Imagery
Modeling and Analysis Methods:
Bayesian Modeling
fMRI Connectivity and Network Modeling 2
Segmentation and Parcellation
Perception, Attention and Motor Behavior:
Perception: Visual 1
Keywords:
Data analysis
FUNCTIONAL MRI
Modeling
Optical Imaging Systems (OIS)
Perception
Segmentation
STRUCTURAL MRI
Vision
1|2 Indicates the priority used for review
My abstract is being submitted as a Software Demonstration.
Please indicate below if your study was a "resting state" or "task-activation" study.
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Are you Institutional Review Board (IRB) certified? Please note: Failure to have IRB approval, if applicable, will lead to automatic rejection of the abstract.
Was any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Was any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.
Please indicate which methods were used in your research:
For human MRI, what field strength scanner do you use?
Which processing packages did you use for your study?
Provide references using author date format
[2] Benson, N. C., et al. (2018). The Human Connectome Project 7 Tesla retinotopy dataset: Description and population receptive field analysis. Journal of Vision, 18(13), 23.
[3] Benson, N. C., et al. (2014). Correction of distortion in flattened representations of the cortical surface allows prediction of V1-V3 functional organization from anatomy. PLoS Computational Biology, 10(3), e1003538.
[4] Benson, N. C., Winawer, J. (2018). Bayesian analysis of retinotopic maps. eLife, 7, e40224.
[5] Esteban, O., et al. (2019). fMRIPrep: a robust preprocessing pipeline for functional MRI. Nature Methods, 16(1), 111.
[6] Glasser, M. F., et al. (2013). The minimal preprocessing pipelines for the Human Connectome Project. NeuroImage, 80, 105-124.
[7] Wandell, B. A., Winawer, J. (2011). Imaging retinotopic maps in the human brain. Vision Research, 51(7), 718-737.
[8] Wandell, B. A., Dumoulin, S. O., Brewer, A. A. (2007). Visual field maps in human cortex. Neuron, 56(2), 366-383.
[9] Wilkinson, M. D., et al. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3.
Acknowledgments. This research was supported by NSF OAC-1916518, NSF IIS-1912270, NSF IIS-1636893, NSF BCS-1734853, NIH 1R01EB029272-01, Google Cloud, a Microsoft Research Award, a Microsoft Investigator Fellowship, and the Indiana University Areas of Emergent Research initiative "Learning: Brains, Machines, Children."