Software Demonstrations
Presentations
BrainSuite Diffusion Pipeline (BDP): Processing tools for diffusion-MRI
Diffusion-weighted MRI has uniquely enabled the study of in vivo brain micro-architecture. However, accurate inferences from diffusion-weighted images (DWI) require specialized processing. The BrainSuite Diffusion Pipeline (BDP) offers an end-to-end processing pipeline for diffusion MRI that includes implementations of established as well as novel processing methods. It provides essential processing steps to correct localized susceptibility-induced geometric distortions, co-register DWI and T1-weighted MRI (T1w-MRI), estimate white matter orientations for tractography, and estimate microstructure-related quantitative maps. It also includes novel methods such as registration-based distortion correction, INVERSION (Inverse contrast Normalization for Very Simple Registration)[2]-based T1w-MRI/DWI co-registration, and the Funk-Radon and cosine transform (FRACT)[3] and EAP response function optimized (ERFO)[8] orientation distribution function (ODF) estimators for improved tracking. In addition, BDP seamlessly integrates with BrainSuite's anatomical processing and image analysis tools, namely Cortical Surface Extraction (CSE), Surface and Volume Registration (SVReg), the BrainSuite Statistics toolbox in R (bssr), and the graphical interface for visualization. BDP is available as open-source software for Windows, Linux, and MacOS, and is included as part of the BrainSuite BIDS App.[5] An interface is also provided for Nipype.[4]
Presenter
Divya Varadarajan, PhD, Athinoula A. Martinos Center for Biomedical Imaging, Harvard, Charlestown, Boston, MA, United States
Clinica
We present new advances made to Clinica (www.clinica.run), an open source software platform for clinical neuroscience studies. Neuroimaging studies are challenging since they involve several data analysis steps such as image preprocessing, extraction of image-derived features or statistical analysis. The development of machine learning methods for neuroimaging also involves most of these steps. The objective of Clinica is to automate the processing and statistical analysis of neuroimaging data and ease the development of machine learning approaches.
New functionalities have been integrated to Clinica to enable the longitudinal analysis of T1w MRI and PET data, and the development of deep learning classification approaches. Other advances aim to consolidate the platform.
Cloud-Oriented NeuroImaging with BrainForge: Auto Group ICA, Managed Study Integration, and Beyond
Researchers working with contemporary neuroimaging studies often manage large amounts of data that are processed through numerous analysis and preprocessing pipelines. As data from many studies accumulate over time, local storage and analysis become burdensome. To solve these issues, many research groups are turning to cloud-based solutions not only for data storage, but also for performing various analysis steps and obtaining important research results. In this work, we present BrainForge, a research-oriented, BIDS-compliant web platform for the management and analysis of neuroimaging data.
Presenter
Bradley Baker, MS, Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Atlanta, GA, United States
FMRIPrep: extending the scanner to produce ready-for-analysis fMRI data
Analyses of blood-oxygen-level-dependent (BOLD) data, like those of other functional magnetic resonance imaging (fMRI) modalities, cannot operate directly on the images reconstructed by the scanner. Researchers have typically addressed this problem by inserting a data "preprocessing" step before analysis. fMRIPrep (Esteban et al., 2018) fulfills this task with an easy-to-use interface that minimizes user intervention by self-adapting to the input data. The ever-increasing number of fMRIPrep users demonstrates the adequacy of the approach for simplifying the neuroimaging workflow while maximizing the transparency and reproducibility of results.
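To illustrate the interface, below is a minimal sketch of a typical single-subject run launched from Python; the dataset path, output path, participant label, and FreeSurfer license location are placeholders, and the full set of options is described in the fMRIPrep documentation.

```python
import subprocess

# Placeholder paths: a BIDS-organized dataset, an output directory, and the
# FreeSurfer license file that fMRIPrep requires.
bids_dir = "/data/my_bids_dataset"
output_dir = "/data/derivatives"
fs_license = "/opt/freesurfer/license.txt"

# Positional arguments follow the BIDS App convention: <bids_dir> <output_dir> <analysis_level>.
subprocess.run(
    [
        "fmriprep", bids_dir, output_dir, "participant",
        "--participant-label", "01",      # preprocess sub-01 only
        "--fs-license-file", fs_license,
    ],
    check=True,
)
```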
Neuroscout: a web-based platform for flexible re-analysis of naturalistic fMRI datasets
fMRI studies using complex naturalistic stimulation, such as movies or audio narratives, hold great promise to reveal the neural activity underlying dynamic perception. However, this potential is limited by the resource-intensive nature of fMRI analysis, and the difficulty of annotating events in rich, multi-modal stimuli. Consequently, only a small fraction of viable hypotheses are ever tested, even as the number of public datasets increases. Here we present Neuroscout, a platform that harnesses automated feature extraction tools and a web-based analysis builder to enable researchers to flexibly define and test novel statistical models in public fMRI datasets.
Nighres: a python toolbox for high-resolution neuroimaging
With the recent advances of ultra-high field MRI into the neuroscientific and clinical domains, high-resolution and multi-dimensional MRI data are increasingly common. Yet, moving into sub-millimeter resolutions and handling quantitative contrasts are major challenges for many image analysis toolboxes, which were initially designed for conventional 1.5T and 3T imaging data. With this new toolbox, we address specifically these issues. The tools gathered in it cover the main steps of structural image processing, from quantitative parameter reconstruction to laminar cortical depth modeling. It focuses not only on the cerebral cortex, but also provides new tools to investigate the subcortex and the cerebellum. The methods in this toolbox scale well with image resolution, and can handle data at 400 µm routinely.
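As a brief sketch of how a laminar analysis step might be scripted with the toolbox (file names are placeholders, and the exact argument names should be checked against the Nighres documentation), equivolumetric layering of the cortex could look like this:

```python
import nighres

# Placeholder inputs: levelset representations of the inner (WM/GM) and outer
# (GM/CSF) cortical boundaries, e.g. produced by a prior cortex extraction step.
inner_levelset = "sub-01_inner_boundary.nii.gz"
outer_levelset = "sub-01_outer_boundary.nii.gz"

# Compute equivolumetric cortical depth and layer labels.
layering = nighres.laminar.volumetric_layering(
    inner_levelset=inner_levelset,
    outer_levelset=outer_levelset,
    n_layers=4,
    save_data=True,
    output_dir="nighres_out",
)

# The call returns a dictionary of output images (e.g., continuous depth and discrete layers).
print(list(layering.keys()))
```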
The brainlife.io cloud services for human visual-field mapping & population receptive field estimate
Using functional magnetic resonance imaging (fMRI) collected during a fixation task, we can measure the human visual cortex and the multitude of retinotopic (visual field) maps within it [7]. The measured fMRI signal can also be used to estimate the properties of the population receptive fields (PRFs) of individual cortical locations within each map [8], providing a quantitative measure of their expected response to certain visual stimuli. These in-vivo measurements are critical for understanding the functional architecture of the human visual system and how it develops, ages, and responds to disease onset. These analyses require both advanced software skills and knowledge of a complex stack of software libraries. Indeed, multiple libraries exist for estimating retinotopic maps and PRF parameters in living human brains, e.g., github.com/kendrickkay/analyzePRF, gru.stanford.edu/doku.php/mrTools/tutorialsprf, github.com/vistalab/vistasoft, or github.com/noahbenson/neuropythy. Yet the skills necessary to use these software libraries are highly specialized, which in turn can limit the application of the methods to studies led by investigators committed to learning advanced coding methods.
Our work promotes FAIR principles [9] by developing a series of cloud services that make visual-field and PRF mapping automated, accessible, and visualizable on the open-science platform brainlife.io. The services consist of containerized "Apps" that process MRI data from the raw NIFTI files (for both fMRI and T1-weighted anatomy). Users can upload their own retinotopic data to be automatically processed by these Apps, or they can process the various datasets available on OpenNeuro.org, DataLad.org, or brainlife.io itself. The easy-to-use interface allows users to take advantage of a free cloud computing infrastructure available at brainlife.io. Finally, brainlife.io generates a full provenance record for the data generated by keeping track of the App types and versions used to estimate visual field maps and PRF parameters.
Presenter
David Hunt, B.A. in Neuroscience, Indiana University
Bloomington, IN
United States
QSIPrep: A robust and unified workflow for preprocessing and reconstructing diffusion MRI
Although diffusion-weighted magnetic resonance imaging (dMRI) acquisitions can take many forms, they all sample q-space in order to characterize water diffusion. Numerous pipelines and software platforms have been built for processing dMRI data, but most work on only a subset of sampling schemes, or implement only parts of the processing workflow. Comparisons across methods are hindered by incompatible software, diverse file formats, and inconsistent naming conventions, among other issues. Here we introduce QSIPrep, a new processing pipeline for diffusion images that is compatible with virtually all dMRI sampling schemes via a uniform, containerized application. Preprocessing includes denoising, distortion correction, head motion correction, coregistration, and spatial normalization. Individual algorithms from a diverse set of cutting-edge software suites are combined to capitalize upon their complementary strengths. Throughout, QSIPrep provides both visual and quantitative measures of data quality and "glass-box" methods reporting. Together, these features allow for easy implementation of best practices while simultaneously maximizing reproducibility.
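Since QSIPrep follows the same BIDS App convention as other containerized pipelines, a minimal single-subject invocation might look like the sketch below (paths and the participant label are placeholders; the output resolution is an illustrative value):

```python
import subprocess

bids_dir = "/data/dwi_study"               # placeholder BIDS dataset
output_dir = "/data/dwi_study/derivatives"

subprocess.run(
    [
        "qsiprep", bids_dir, output_dir, "participant",
        "--participant-label", "01",
        "--output-resolution", "2.0",      # isotropic voxel size (mm) of the preprocessed output
    ],
    check=True,
)
```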
EzBIDS: The open cloud service for automated, validated DICOM to BIDS conversion
Over the past several years there has been a concerted effort within the neuroimaging field to organize and standardize imaging data according to the specifications laid out in the Brain Imaging Data Structure standard (BIDS; Gorgolewski et al., 2016). Adhering to this standard is of great benefit for data sharing and replication of previous studies. Yet, as of today, the BIDSification process is nontrivial and requires a considerable amount of time as well as advanced software skills. Currently, a few dozen open-source projects provide code to convert DICOM to BIDS. However, all of these projects require the use of a Linux terminal and programming; no tool currently exists that targets the broader community of potential BIDS users, whose backgrounds span from no to limited coding skills. To broaden the reach and adoption of the BIDS standard, mechanisms are needed that lower the barrier of entry to data standardization. We present a new cloud computing service that automatically maps DICOM files to the BIDS standard. The service is publicly available at brainlife.io/ezbids but can also be deployed on other resources.
Brainiak Education: User-Friendly Tutorials for Advanced, Computationally-Intensive fMRI Analysis
Advanced brain imaging analysis methods, including multivariate pattern analysis (MVPA), functional connectivity, and functional alignment, have become powerful tools in cognitive neuroscience over the past decade. There now exist multiple software packages that implement some of these techniques. Although these packages are useful for expert practitioners, novice users face a steep learning curve because of the computational skills required. Furthermore, most standard fMRI analysis packages (e.g., AFNI, FSL, SPM) focus primarily on preprocessing and univariate analyses, leaving a gap in how to integrate advanced tools. BrainIAK (brainiak.org) is a newer, open-source Python software package that seamlessly combines several cutting-edge, computationally efficient techniques with other Python packages (e.g., nilearn, scikit-learn) for file handling, visualization, and machine learning, picking up where other packages leave off. As part of efforts to disseminate this package, we have developed user-friendly tutorials and exercises in Jupyter notebook format for learning BrainIAK and advanced fMRI analysis in Python more generally (brainiak.org/tutorials) (Kumar et al., in press). These materials cover cutting-edge techniques including: MVPA (Norman et al., 2006); representational similarity analysis (Kriegeskorte et al., 2008); background connectivity (Al-Aidroos et al., 2012); full correlation matrix analysis (Wang et al., 2015); inter-subject correlation (Hasson et al., 2004); inter-subject functional connectivity (Simony et al., 2016); shared response modeling (Chen et al., 2015); real-time fMRI (deBettencourt et al., 2015); and event segmentation using hidden Markov models (Baldassano et al., 2017). For long-running jobs with large memory consumption, we provide detailed information on using high-performance computing clusters (HPCs). These notebooks have been successfully deployed and extensively tested at multiple sites, including in advanced fMRI analysis courses at Yale and Princeton and at multiple workshops and hackathons. We hope that these materials become part of a growing pool of open-source software and educational materials for large-scale, reproducible fMRI analysis.
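As a flavor of what the tutorials cover, the sketch below fits the hidden Markov model event segmentation (Baldassano et al., 2017) to simulated data standing in for a region-of-interest time series; the array shapes are illustrative only.

```python
import numpy as np
from brainiak.eventseg.event import EventSegment

# Simulated stand-in for preprocessed BOLD data: 200 time points x 50 voxels.
rng = np.random.RandomState(0)
bold = rng.randn(200, 50)

# Fit an HMM that partitions the time series into 10 latent events.
hmm = EventSegment(n_events=10)
hmm.fit(bold)

# For each time point, the probability of belonging to each event.
event_probabilities = hmm.segments_[0]
print(event_probabilities.shape)  # (200, 10)
```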
BRAPH 2.0: A Graph Theory Software for the Analysis of Multilayer Brain Connectivity
The brain is a complex network that relies on the interaction between its various regions, known as the connectome (Sporns 2013). In the past decade, the organization of the human connectome has been studied using different imaging modalities such as structural magnetic resonance imaging (MRI), functional MRI (fMRI), positron emission tomography (PET) and electroencephalogram (EEG) data (Bullmore & Bassett 2011). However, the connectomes obtained from these modalities are often analyzed separately using a single-network approach, despite growing evidence that they are not independent and often interact with each other in the same subjects (De Domenico 2017; Mandke et al. 2018).
Connectome Mapper 3: a software pipeline for multi-scale connectome mapping of multimodal MR data
Connectome Mapper (CMP) is an open-source software pipeline with a Graphical User Interface (GUI) written in Python. It was historically designed to help researchers with the organization and processing of raw structural MRI (sMRI) and diffusion MRI (dMRI) data to obtain a hierarchical multi-scale brain parcellation (Cammoun 2012) and its corresponding structural connectomes (Daducci 2012). While the first two versions were designed with ease of use, modularity, configurability, re-executability and transparency in mind, they proved to be limited in terms of interoperability, reusability, portability, and reproducibility. Following recent advances in the standardization of neuroimaging data organization (Gorgolewski 2016) and processing (Gorgolewski 2017), we present the third version of CMP (CMP3). It has massively evolved in terms of the underlying code, the processing tools and the scope of functionality provided, and has been extended to the processing of resting-state fMRI (rfMRI) data.
Presenter
Sebastien Tourbier, University Hospital of Lausanne (CHUV) and University of Lausanne (UNIL), Radiology
Lausanne, Vaud
Switzerland
DPABISurf V1.3: An Updated Surface-Based Resting-State fMRI Data Analysis Toolbox
DPABISurf V1.3 is an updated surface-based resting-state fMRI data analysis toolbox evolved from DPABI/DPARSF, and is as easy to use as DPABI/DPARSF. DPABISurf is based on fMRIPrep 1.5.0 (Esteban et al., 2018) (RRID:SCR_016216), FreeSurfer 6.0.1 (Dale et al., 1999) (RRID:SCR_001847), ANTs 2.2.0 (Avants et al., 2008) (RRID:SCR_004757), FSL 5.0.9 (Jenkinson et al., 2002) (RRID:SCR_002823), AFNI 20160207 (Cox, 1996) (RRID:SCR_005927), SPM12 (Ashburner, 2012) (RRID:SCR_007037), PALM alpha112 (Winkler et al., 2016), GNU Parallel (Tange, 2011), MATLAB (The MathWorks Inc., Natick, MA, US) (RRID:SCR_001622), Docker (https://docker.com) (RRID:SCR_016445), and DPABI V4.3 (Yan et al., 2016) (RRID:SCR_010501). DPABISurf provides a user-friendly graphical user interface (GUI) for surface-based preprocessing pipelines, statistical analyses, and results viewing, while requiring no programming/scripting skills from the users.
The brainlife.io cloud-services for functional network neuroscience
Using functional magnetic resonance imaging (fMRI), we can measure the brain's distributed functional organization. Maps of fMRI activity can be used to create functional networks, which in turn can be analyzed using the tools of network science to uncover brain-wide properties such as functional community organization [10] or hub-like structure [6].
The field of Network Neuroscience exists at the intersection of human brain mappers and network science practitioners. These fields require both advanced software skills and mathematical knowledge. On the one hand, fMRI specialists learn to employ highly specialized image processing techniques and must make sure their data are artifact-free; on the other hand, network scientists focus on learning and developing innovative network science algorithms applicable across fields. Achieving expert-level knowledge in both domains is both a challenge and a barrier for investigators and trainees in either field.
Our work promotes FAIR principles [9] by addressing the challenges highlighted above. We present a series of cloud computing services that make network neuroscience more accessible by enabling the generation of functional brain networks in a streamlined and intuitive manner. The services consist of containerized "Apps" that process MRI data from the raw NIFTI files (for both fMRI and T1-weighted anatomy) to node-by-node functional connectivity matrices. These services can be run automatically on the various datasets available on OpenNeuro.org, on BIDS data hosted on DataLad.org, or on user-uploaded data via a point-and-click web interface. The interface allows users to take advantage of a powerful distributed cloud computing infrastructure via brainlife.io. Finally, brainlife.io generates a full provenance record for the data generated by keeping track of the Apps (and versions) used to build the brain networks, supporting the aim of computational reproducibility [5].
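For readers unfamiliar with the output format, the sketch below is a conceptual illustration (not brainlife.io code) of what a node-by-node functional connectivity matrix is: the correlation between parcel-averaged time series.

```python
import numpy as np

# Stand-in for parcellated BOLD data: 300 time points x 100 nodes (parcels).
rng = np.random.RandomState(42)
timeseries = rng.randn(300, 100)

# Node-by-node functional connectivity as Pearson correlation between parcels.
fc_matrix = np.corrcoef(timeseries.T)   # shape (100, 100)
np.fill_diagonal(fc_matrix, 0)          # self-connections are typically discarded
print(fc_matrix.shape)
```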
Presenter
Joshua Faskowitz, Indiana University, Psychological and Brain Sciences
Bloomington, IN
United States
PyNets: Reproducible Ensemble Graph Analysis of Functional and Structural Connectomes
Connectomics remains a nascent subfield of neuroscience with a constantly evolving set of methods. Although connectomes may afford us the ability to study fine-grained, high-dimensional individual differences, that gain would seem to come at additional costs to reproducibility (that we cannot afford). More specifically, in estimating a connectome from neuroimaging data, the researcher is forced to make many, often arbitrary methodological choices (e.g. parcellation scheme(s), connectivity model(s), tractography step size(s), etc.) that can greatly influence a network's configuration downstream. Because of the added model uncertainty that results, these untuned hyperparameters amount to a combinatorial explosion of 'hidden' researcher degrees of freedom that can easily distort statistical inference. Although perhaps previously thought to be an intractable problem, PyNets aims to directly address this methodological gap by offering a powerful computational framework for modeling individual structural and/or functional connectomes iteratively and with hyperparameter optimization.
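The scale of the problem is easy to appreciate with a small, purely illustrative enumeration (the choices and values below are hypothetical, not PyNets defaults): even a handful of options per decision multiplies into dozens of distinct connectomes per subject.

```python
from itertools import product

# Hypothetical hyperparameter choices facing a connectome researcher.
parcellations = ["atlas_200", "atlas_400", "atlas_1000"]
connectivity_models = ["correlation", "partial_correlation", "covariance"]
thresholds = [0.1, 0.2, 0.3]
step_sizes = [0.2, 0.5]   # tractography step sizes (mm)

pipelines = list(product(parcellations, connectivity_models, thresholds, step_sizes))
print(len(pipelines))     # 3 * 3 * 3 * 2 = 54 connectome estimates per subject
```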
VB_toolbox: A tool for investigating neural feature gradients in Python and MATLAB
There has been increasing interest in "gradient analysis". Although the technique has been used in the neuroimaging literature since Johansen-Berg et al. (2004), a recent surge in interest occurred when Margulies et al. (2016) embedded the default mode network within a gradient of macroscopic cortical organisation. Gradient analyses in the literature rely on spectral graph theory and the eigendecomposition of the graph Laplacian. The eigenpair corresponding to the second-smallest eigenvalue of this matrix represents the principal gradient of similarity. In this abstract we introduce a new toolbox, built in Python and MATLAB, for carrying out gradient analysis using a simple command-line interface. The toolbox performs gradient analyses on cortical surfaces and is able to perform them at the whole-brain level, using ROI approaches, or with a searchlight across the cortex (Kriegeskorte et al. 2006).
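The spectral step described above can be sketched in a few lines (a conceptual illustration under simplified assumptions, not the VB_toolbox implementation): build an affinity matrix, form its graph Laplacian, and take the eigenvector of the second-smallest eigenvalue as the principal gradient.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian

# Toy affinity matrix: similarity between 50 vertices described by 20 features each.
rng = np.random.RandomState(0)
features = rng.randn(50, 20)
affinity = np.corrcoef(features)
affinity[affinity < 0] = 0                      # keep a non-negative graph

L = laplacian(affinity, normed=True)            # normalized graph Laplacian
eigenvalues, eigenvectors = np.linalg.eigh(L)   # eigenvalues in ascending order
principal_gradient = eigenvectors[:, 1]         # second-smallest eigenpair (Fiedler vector)
print(principal_gradient.shape)                 # one gradient value per vertex
```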
MyPLS 2.0 - Partial least squares analysis for multivariate brain-behavior associations
Unsupervised learning methods such as Partial Least Squares (PLS) can help overcome the limitations that arise with classification when classes are not well defined. PLS is a data-driven multivariate statistical technique that aims to extract relationships between two data matrices (McIntosh et al., 2004). PLS has previously been used to link neural variability with age (Garrett et al., 2010), or atrophy to symptoms in Parkinson's disease (Zeighami et al., 2019).
Here, we present a toolbox that deploys Behavior PLS, which aims to maximize the covariance between neuroimaging and behavioral data by deriving latent components (LCs) that are optimally weighted linear combinations of the original variables.
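Conceptually, the latent components come from a singular value decomposition of the brain-behavior cross-covariance matrix. The sketch below illustrates that step on random data; it is a simplified illustration rather than the toolbox's implementation, which additionally includes steps such as significance and stability assessment.

```python
import numpy as np

# Random stand-ins: 80 subjects x 500 imaging features, and 80 subjects x 10 behavioral measures.
rng = np.random.RandomState(0)
X = rng.randn(80, 500)
Y = rng.randn(80, 10)

# Z-score each column, then form the behavior-by-imaging cross-covariance matrix.
Xz = (X - X.mean(0)) / X.std(0)
Yz = (Y - Y.mean(0)) / Y.std(0)
R = Yz.T @ Xz

# Each latent component (LC) pairs behavioral saliences (U) with imaging saliences (Vt).
U, s, Vt = np.linalg.svd(R, full_matrices=False)
brain_scores = Xz @ Vt.T          # subject-wise imaging composite scores
behavior_scores = Yz @ U          # subject-wise behavioral composite scores
explained_covariance = s**2 / np.sum(s**2)
print(explained_covariance[:3])   # covariance explained by the first three LCs
```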
Presenter
Daniela Zöller, Ecole Polytechnique Fédérale de Lausanne (EPFL) and University of Geneva, Geneva, Switzerland
Nilearn and Nistats: Machine learning and statistics for fMRI in Python
Efficient and reproducible science depends on a strong software ecosystem [1]. We present Nilearn and Nistats, two Python packages empowering the neuroimaging community, which will soon be united in the same library. Nilearn (https://nilearn.github.io) focuses on fast and easy statistical learning on fMRI data. It provides efficient and reliable implementations of machine learning methods tailored to the needs of the neuroimaging community. It builds upon a Python "data science ecosystem" of packages such as numpy [2], scipy [3], scikit-learn [4], and pandas [5], which are extensively used, tested and optimized by a large scientific and industrial community. This makes Nilearn easy to use for a broad spectrum of researchers who are familiar with the Python ecosystem, and reduces the need to learn the idiosyncrasies of specific command-line or GUI-based neuroimaging tools. Specifically, Nilearn provides methods for decoding, functional connectivity analysis, and biomarker extraction. It also includes datasets for teaching, as well as interactive visualization of brain images and connectomes.
Nistats (https://nistats.github.io) provides tools for mass-univariate linear models, the standard analysis in fMRI. It will eventually become part of Nilearn. Both libraries have been widely used, taught, and maintained by the neuroimaging community for many years. They build upon and contribute to the growing ecosystem of Python tools for neuroimaging, such as nibabel [6] and dipy [7].
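A minimal sketch of a typical Nilearn workflow is shown below, using one of the library's bundled example datasets (downloading it requires internet access); it extracts atlas-based time series, computes a correlation matrix, and plots the resulting connectome.

```python
from nilearn import datasets, input_data, plotting
from nilearn.connectome import ConnectivityMeasure

# Fetch one subject of an example resting-state dataset and a probabilistic atlas.
data = datasets.fetch_development_fmri(n_subjects=1)
atlas = datasets.fetch_atlas_msdl()

# Extract one standardized time series per atlas region, regressing out confounds.
masker = input_data.NiftiMapsMasker(atlas.maps, standardize=True)
timeseries = masker.fit_transform(data.func[0], confounds=data.confounds[0])

# Region-by-region correlation matrix, displayed as a connectome on a glass brain.
correlation = ConnectivityMeasure(kind="correlation").fit_transform([timeseries])[0]
plotting.plot_connectome(correlation, atlas.region_coords, edge_threshold="95%")
plotting.show()
```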
The BrainSuite Statistics Toolbox in R (bssr)
The BrainSuite Statistics toolbox in R (bssr) is a software package developed in R that performs statistical analysis of population-level neuroimaging data processed using BrainSuite [1]. Specifically, it provides statistical tools for conducting cortical thickness analysis, tensor based morphometry, and analysis of diffusion measures.
E-COBIDAS: a webapp to improve neuroimaging methods and results reporting
In any scientific study, a complete and precise method section is necessary to understand and evaluate the results, plan replications, seed new research and compare outcomes across studies. However, a large number of neuroimaging studies fail to report important details necessary for independent investigators to achieve these goals [1].
In order to improve this situation, the Committee on Best Practices in Data Analysis and Sharing (COBIDAS) of OHBM released a report to establish a set of best practices for methods and results reporting in f/MRI research [4]. This has been recently followed by a similar initiative for EEG and MEG [5].
The COBIDAS reports are accompanied by checklist tables meant to help authors ensure that their methods and results descriptions comply with the Committee's recommendations. These checklists are a valuable resource, but the static PDF tables do not lend themselves to an actionable format, and users may be required to scroll through pages of items that do not concern their specific use case. Moreover, while these checklists help generate a human-readable methods section, they do not provide ways to create a machine-readable equivalent that would encapsulate a large part of the metadata in a study.
The goal of the eCOBIDAS webapp is to provide a user-friendly solution for researchers to fill out the COBIDAS checklists while generating a machine-readable summary of a methods section. This summary can then be used to automate methods-section writing for authors, or to facilitate the assessment of study quality during peer review.
Presenter
Remi Gau, Institute of Psychology, Université Catholique de Louvain, Louvain-la-Neuve, Wallonie, Belgium
BEST Toolbox: Brain Electrophysiological recording & STimulation Toolbox
Non-invasive brain stimulation (NIBS) experiments involve many standard procedures that are nonetheless not sufficiently standardized in the community. Transcranial magnetic stimulation (TMS) protocols usually require motor hotspot search, motor threshold hunting, motor evoked potential (MEP) and TMS-evoked EEG potential (TEP) measurements, estimation of stimulus-response curves, paired-pulse TMS, rTMS intervention protocols, etc., and, more recently, real-time EEG-triggered stimulation. Given the diversity in application and experience of the experimenter, standardized, automated, and yet flexible data collection and analysis tools are needed. Here, we introduce the Brain Electrophysiological recording and STimulation (BEST) Toolbox, a MATLAB-based open-source software that interfaces with a wide variety of EEG, EMG, and TMS devices and allows running flexibly configured but fully automated closed-loop protocols. The BEST Toolbox provides a software framework for brain stimulation studies, including real-time closed-loop applications. It is powered by state-of-the-art signal processing algorithms, combined with an easy-to-use graphical user interface (GUI), in order to facilitate data collection, live and interactive data analyses and visualization, data sharing, study comparison and replication, student training, and open science.
Framework for performing multi-subject analysis in electrophysiology within the BIDS format
The neuroscience community has faced the challenge of reusing scripts to analyse data coming from different projects or different centers, yet analysing large cohorts of data is very important to increase statistical power. The Brain Imaging Data Structure (BIDS) has been developed, in part, to overcome this problem. BIDS allows data to be organized and shared easily [1]. Originally developed for neuroimaging, it has been extended to different modalities including intracranial electroencephalography (iEEG) [2]. Structuring data in BIDS thus allows combining many subjects in the same analysis, in either single-center or multicenter studies. However, converting existing databases to this structure can be time consuming, and transferring data from different centers can be prone to errors. Moreover, even though many software packages are now adapted to BIDS (e.g., Brainstorm) and BIDS Apps have been developed to standardize data analysis [3], these software solutions have focused on anatomical data. Our goal was thus to develop tools for transferring and organising data (both electrophysiological and anatomical) from different centers in the BIDS format, and to launch automated analyses on several subjects with common criteria.
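As a generic illustration of the kind of conversion step our framework automates (this uses the independent MNE-BIDS package rather than our own tools, and the file name and BIDS entities are placeholders), writing an electrophysiology recording into a BIDS tree can be done as follows:

```python
import mne
from mne_bids import BIDSPath, write_raw_bids

# Placeholder raw recording; any format readable by MNE works similarly.
raw = mne.io.read_raw_brainvision("sub01_rest.vhdr")

# Target location in the BIDS dataset, defined by its entities.
bids_path = BIDSPath(
    subject="01",
    session="01",
    task="rest",
    datatype="eeg",
    root="/data/bids_dataset",
)
write_raw_bids(raw, bids_path, overwrite=True)
```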
Incorporating quantitative EEG analysis into the MNI Open Science neuroinformatics ecosystem
Revived interest in electrophysiology, driven by the maturity of EEG source imaging, has led to new informatics challenges (7). Integrating sophisticated EEG analysis with high-performance computing is pivotal to promulgating standardized methods across research and clinical settings (1).
In response, a collaboration from the Cuban Neuroscience Center (CNEURO), the University of Electronic Science and Technology of China (UESTC) and the Montreal Neurological Institute (MNI) is incorporating CNEURO's quantitative EEG methods into the MNI Open Neuroscience ecosystem, based on the LORIS and CBRAIN data- and tool-sharing platforms (3).
CNEURO's Quantitative EEG toolbox (qEEGt) was recently released via CBRAIN (9). Its VARETA source imaging method (2), age regression equations and calculation of z-spectra are published on GitHub and Zenodo. The qEEGt toolbox leverages Bayesian estimation of source localization and connectivity for improved spatial resolution, and produces age-corrected normative SPM maps of EEG log source spectra. Given the impact of SPM across neuroimaging (5), such open-access toolkits hold similar potential for electrophysiology.
Physiopy/phys2bids: BIDS formatting of physiological recordings
The BOLD fMRI signal contains multiple subject-dependent sources of physiological origin. This fact can be exploited to capture physiological states (e.g. cerebrovascular reactivity) [1,6], or physiological fluctuations can be treated as noise and removed to improve activation or connectivity mapping [2,4]. In both cases, it is necessary to measure physiological signals (e.g. cardiac pulse, chest volume, exhaled CO2 and O2, skin conductance). It is becoming common practice in the neuroimaging community to share collected data on public platforms that rely on Brain Imaging Data Structure (BIDS [3]). However, due to (I) the high variability in the experimental setup and measurement process, (II) the lack of tools to convert such data into BIDS format, and (III) the lack of consensus guidelines for how to use such data in neuroimaging pipelines, few centres or researchers routinely collect and utilize physiological data and even fewer share them. Here, we introduce the development of physiopy: a user-friendly, community-driven bundle of tools that aim to help researchers collect, share, and prepare physiological data for neuroimaging analysis.
Semi-Automatic SEEG Localization and Interactive Neuroimage Visualization in Epilepsy Patients
Pipelines such as FreeSurfer [15, 16] and deep learning based models [17, 18] currently exist that automatically segment structural MRI images based on an anatomical atlas. Building on these, tools for manual localization of implanted electrodes have been developed within FieldTrip [19] and img_pipe [20], but they are generally optimized mainly for ECoG electrodes. In epilepsy monitoring, more and more patients are being implanted with SEEG depth electrodes, as they provide access to sub-cortical structures and the 3D network of the brain [21, 22, 23]. Currently, the open-source tools for localizing iEEG electrodes are limited in three ways: i) they are not optimized for SEEG and require manual localization, ii) their pipelines have a high learning curve or require very specific data structures, and iii) they do not provide a way to visualize the SEEG electrodes in the context of the 3D brain.
In this work, we developed an open-source repository (https://github.com/adam2392/neuroimg_pipeline) that performs automatic segmentation of the structural T1 MRI, semi-automates localization of the SEEG electrodes, and visualizes SEEG electrodes within a 3D brain. We validated the accuracy of our spatial localizations with respect to manual localization, and our anatomical assignments of SEEG electrodes, on a cohort of n=40 epilepsy patients.
SimNIBS 4.0: Detailed Head Modeling for Transcranial Brain Stimulation and EEG
Computational modeling of the electric currents in the cortex is an integral part of many brain mapping approaches [1,2]. The currents can be either externally induced by electric (TES) or magnetic (TMS) stimulation, or due to neuronal activity in which case they can be measured using M/EEG. In both cases the current flow is largely shaped by the individual anatomy [3], which implies that reliable stimulation targeting, or source reconstructions, require accurate anatomical models of the head anatomy. In the new version of the open-source toolbox for Simulation of Non-Invasive Brain Stimulation (SimNIBS 4.0), we have improved the accuracy and robustness of the anatomical modeling, and included multiple additional head tissue classes, such as veins and spongy bone. SimNIBS 4.0 also supports lead-field calculations, which allows the improved head modeling to be integrated into EEG source reconstruction algorithms.
LIONirs toolbox design for fNIRS data analysis.
Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging technique. Analogous to functional magnetic resonance imaging (fMRI), it measures changes in cerebral blood oxygenation related to neuronal processes in cortical regions. In order to facilitate fNIRS data analysis, we developed a MATLAB toolbox, and here we describe examples of its functionalities. The LIONirs toolbox uses the MATLAB Batch System of the SPM toolbox and provides a complete visual interface for data inspection and topographical representation during each stage of the analysis. This makes it a user-friendly tool for researchers who do not have programming skills. We demonstrate this property, as well as its flexibility and other capabilities, by creating a processing pipeline and applying it to data gathered from a group of 14 healthy subjects performing a passive story-listening task.
The NIRS Brain AnalyzIR Toolbox
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low-levels of light (650–900 nm) to measure changes in cerebral blood volume and oxygenation. The lower operation cost, portability, and versatility of this method make it an alternative to methods such as functional magnetic resonance imaging for studies in pediatric and special populations and for studies without the confining limitations of a supine and motionless acquisition setup. However, the analysis of fNIRS data poses several challenges stemming from the unique physics of the technique, the unique statistical properties of data [1], and the growing diversity of non-traditional experimental designs being utilized in studies due to the flexibility of this technology. For these reasons, specific analysis methods for this technology must be developed. In this paper, we introduce the NIRS Brain AnalyzIR toolbox [2] as an open-source Matlab-based analysis package for fNIRS data management, pre-processing, and first- and second-level (i.e., single subject and group-level) statistical analysis.
MNI SISCOM: An Open-Source Tool for Subtraction Ictal Single-photon emission CT Coregistered to MRI
Subtraction ictal single-photon emission CT coregistered to MRI (SISCOM) is a well-established technique for quantitative analysis of ictal (during a seizure) vs interictal (between seizures) SPECT images that can contribute to the identification of the epileptogenic zone in patients with drug-resistant epilepsy (Ahnlide et al., 2007). However, there is presently a lack of user-friendly, free and open-source software to compute SISCOM results from raw SPECT and MRI images. Multi-purpose image processing packages (Penny et al., 2011) already provide tools (e.g., coregistration and image calculators) that allow for the computation of SISCOM images, but obtaining these results typically requires several steps and necessitates a certain level of technical expertise. In this project, we developed an open-source graphical and command-line scriptable application to facilitate the process of computing SISCOM images. The goal of this project is to provide a freely available, single-purpose and user-friendly tool to implement SISCOM.
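The core subtraction step is conceptually simple; the sketch below illustrates it with nibabel and numpy (a simplified illustration assuming the ictal and interictal SPECT images are already coregistered, not the MNI SISCOM implementation; file names are placeholders).

```python
import numpy as np
import nibabel as nib

# Placeholder inputs: ictal and interictal SPECT volumes already coregistered to the MRI.
ictal = nib.load("ictal_spect_coreg.nii.gz")
interictal = nib.load("interictal_spect_coreg.nii.gz")
ictal_data = ictal.get_fdata()
interictal_data = interictal.get_fdata()

# Normalize each image by its mean intensity, then subtract.
diff = ictal_data / ictal_data.mean() - interictal_data / interictal_data.mean()

# Keep only voxels exceeding a threshold, here ~2 standard deviations of the difference.
z = (diff - diff.mean()) / diff.std()
siscom = np.where(z > 2.0, z, 0.0)

nib.save(nib.Nifti1Image(siscom, ictal.affine), "siscom_result.nii.gz")
```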
Osprey: Open-Source Processing, Reconstruction & Estimation of Magnetic Resonance Spectroscopy Data
Modern magnetic resonance spectroscopy (MRS) data analysis requires elaborate preprocessing, interfacing with external fitting software, and tissue and relaxation corrections of the results obtained with the external software. Well-resourced labs frequently rely on in-house code for such tasks, but a widely used standardized pipeline is not available. Additionally, the default linear-combination modeling software is an expensive, closed-source, commercial product with limited on-going development. As a result, the entry threshold for new labs looking to apply MRS is high, the methods applied are heterogeneous and often poorly described in the literature, and the future is uncertain.
Here we describe a new MATLAB-based toolbox "Osprey" which streamlines all steps of state-of-the-art pre-processing, linear-combination modeling, tissue correction, quantification, and visualization of MRS data into a single environment. The Osprey framework is designed in a modular way to flexibly adopt new methods and encourage community contribution.
Presenter
Georg Oeltzschner, Johns Hopkins University, Department of Radiology and Radiological Science
Baltimore, MD
United States
OpenNFT: open-source Python/Matlab framework for real-time fMRI neurofeedback and quality assessment
During the software demonstration, I will explore the GUI-based multi-processing open-source framework for real-time fMRI neurofeedback training and quality assessment, termed OpenNFT [1,2] (Fig. 1; http://opennft.org/). This framework is based on the platform-independent interpreted programming languages Python and Matlab to facilitate concurrent functionality, high modularity, and the ability to extend the software in Python or Matlab depending on programming preferences, research questions, and clinical application. The core programming engine is Python, which provides greater functionality and flexibility than Matlab. Based on this core, Matlab processes are integrated to add specific functions.
Macapype: An open multi-software framework for non-human primate anatomical MRI processing
Non-human primates (NHP) are increasingly used for cross-species neuroimaging studies, either for anatomical or functional comparison with humans. Anatomical MR images are typically segmented in order to define regions of interest for fMRI and diffusion MRI analyses, for surface reconstruction, or to localize implanted electrodes for electrophysiology. Although MRI processing is largely standardized in humans, it is still a challenge to define robust processing pipelines for the segmentation of NHP anatomical images. Because acquisition parameters and experimental settings are much more variable in NHP than in human studies (size of animals, resolution, field of view, signal-to-noise ratio, availability of T2w images, etc.), there are multiple ways to perform each processing step (see for example Balbastre et al., 2017; Tasserie et al., 2019).
To unify processing of NHP anatomical MRI, we propose Macapype (https://github.com/Macatools/macapype), an open-source framework to create custom pipelines based on Nipype (Gorgolewski et al., 2011).
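To give a sense of how such Nipype-based pipelines are assembled, the sketch below wires two generic nodes together (these require ANTs and FSL and are illustrative examples, not Macapype's actual processing steps; the input file name is a placeholder).

```python
from nipype import Node, Workflow
from nipype.interfaces.ants import N4BiasFieldCorrection
from nipype.interfaces.fsl import BET

# Node 1: bias-field correction of the anatomical image (requires ANTs).
bias_correct = Node(N4BiasFieldCorrection(dimension=3), name="bias_correct")
bias_correct.inputs.input_image = "sub-01_T1w.nii.gz"   # placeholder input

# Node 2: brain extraction (requires FSL).
brain_extract = Node(BET(frac=0.5), name="brain_extract")

# Wire the nodes into a workflow: the corrected image feeds the brain extraction.
wf = Workflow(name="anat_sketch", base_dir="work")
wf.connect(bias_correct, "output_image", brain_extract, "in_file")
wf.run()
```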
Presenter
Bastien Cagna, Institut des Neurosciences de la Timone, Aix-Marseille Université, Marseille, France
Mapping Cross-Scale Brain Data Using Inter-Atlas Connectivity Transformation (IntACT)
Combining information from neuroimaging, histology and axonal tract tracing data across species allows neuroscientists to gain better understanding of brain structure on macroscopic and mesoscopic scales. We introduce Inter-Atlas Connectivity Transformation (IntACT), a user-friendly tool to combine spatial brain data from different modalities at different scales of resolution.
NiMARE: A Neuroimaging Meta-Analysis Research Environment
Meta-analytic databases like BrainMap, Neurosynth, and NeuroVault have become extremely popular tools for a range of analyses, including coordinate- and image-based meta-analysis, region-of-interest definition, meta-analytic coactivation modeling, meta-analytic parcellation, semantic model development, and quantitative functional decoding. Each of these analyses has been approached in a number of ways across the literature, often accompanied by closed-source code, or even no code at all.
NiMARE (Neuroimaging Meta-Analysis Research Environment) is a Python library that implements a range of meta-analytic tools for neuroimaging data. NiMARE is open source, collaboratively developed, and includes citations for all methods so that the original creators will receive credit in any publications using their method. NiMARE is currently in alpha development, although in the past year the package has developed considerably. Here we describe the latest improvements to NiMARE, including documentation, implemented methods, software improvements, and future directions.
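A minimal sketch of a coordinate-based meta-analysis with NiMARE is shown below; the input file name is a placeholder, and because the package is still in alpha the module paths may shift between releases.

```python
from nimare.io import convert_sleuth_to_dataset
from nimare.meta.cbma.ale import ALE

# Build a NiMARE Dataset from a Sleuth-format text file of reported coordinates.
dset = convert_sleuth_to_dataset("my_coordinates_sleuth.txt")

# Run an activation likelihood estimation (ALE) meta-analysis.
ale = ALE()
results = ale.fit(dset)

# Write the resulting statistical maps (e.g., the ALE z map) to disk.
results.save_maps(output_dir="ale_results")
```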
NS+: A new meta-analysis tool to extend the utility of NeuroSynth
A vast amount of human neuroimaging research seeks to understand the functional mapping of brains with forward inference analyses, which show brain activity produced by specific manipulations but do not indicate causal relationships in the opposite direction. NeuroSynth (Yarkoni et al., 2011) is a tool that aims to address this problem by synthesizing more than 14,000 fMRI studies and automating reverse inference meta-analyses, such as mapping activation probabilities (e.g., in Broca's area) given terms of interest (e.g., "language"). However, NeuroSynth can be limited in its flexibility: it is easy to obtain regions of interest given predefined research topics, but not vice versa. Here, we created a new software tool, NS+, which explores research terms given ROIs, and thereby further extends the utility of NeuroSynth-based reverse inference meta-analysis.
AxonDeepSeg: Automatic Myelin and Axon Segmentation Using Deep Learning
Quantitative MRI techniques that can probe tissue microstructure, such as diffusion MRI (NODDI, AxCaliber) (Duval 2016), magnetization transfer (Sled 2018), and myelin water fraction imaging (Alonso-Ortiz 2015), are under constant scrutiny when it comes to biological specificity (e.g., axon diameter, myelin density, g-ratio). One way to properly validate those techniques is histology, whereby a piece of tissue is imaged at the nanometer scale in order to derive statistics about the morphometrics of the cells within an equivalent MRI voxel (e.g., the axon diameter distribution). However, manually segmenting histology images is a time-consuming endeavour that is prone to inconsistent labelling, both by a single rater and between multiple raters. Several axon/myelin segmentation tools have been proposed using conventional image processing techniques (Liu 2011); however, they are not flexible across modalities, are not fully automatic, and do not harness the full information content of the images. We propose a deep learning approach in order to overcome these limitations, and to offer the neuroscience community a free and open-source alternative for segmenting myelin and axons in their histology slides. We also offer a graphical user interface (GUI) allowing for additional manual corrections if needed.
WikiBS: a public wiki for segmenting high resolution brainstem images
The brainstem contains a large number of white and gray matter structures involved in almost all central nervous system functions, some of which are also potential targets for deep electrical stimulation in various motor and psychiatric diseases. Its fine anatomical study was long limited to histological methods, but has recently become accessible to ultrahigh field (UHF) MRI performed ex vivo at a resolution as high as 50 microns (Makris et al. 2019). Reproducible segmentation of such datasets requires a high degree of anatomical knowledge and a precise definition of segmentation rules. The purpose of this work was to propose a practical tool, WikiBS, helping the user to manually delineate major gray matter structures within the human brainstem.
Presenter
François Lechanoine, MD, Service de Neurochirurgie, CHU de Grenoble, Grenoble, France
TIRL: Automated Non-Linear Registration of Stand-Alone Histological Sections to Whole-Brain MRI
Advanced MRI methods are sensitive to tissue properties at much finer scales than the resolution of a clinical MRI scan. Consequently, it is of great interest to determine what the MRI signal can reveal about healthy tissue microstructure and how it is affected by disease. Conversely, most existing biophysical models concern healthy conditions, and radiological findings in patients are rarely followed up by post-mortem histological validation, leaving the radiological and histopathological understanding of human neurodegeneration detached. Bridging the gap requires precise alignment between MRI and histology, which so far has been predominantly addressed for serial sections. However, the costs of this technique are prohibitively high to take disease heterogeneity into account by studying a multitude of brains. Here we report a customisable image registration platform to automate the alignment of conventional small-slide histological sections to whole-brain post-mortem MRI as an alternative.
Presenter
Istvan Huszar, MD, Nuffield Department of Clinical Neurosciences, Oxford, Oxfordshire, United Kingdom
BrainVR: A Virtual Reality System for Neurology Education
Visualization of complex neuroimaging data, such as brain parcellations, functional brain connectivity, diffusion tensor imaging, functional imaging, and combinations of these techniques, is a difficult task because they represent 3D structures. Different advanced visualization solutions have been proposed in previous work (stereoscopy: Rojas, 2014; virtual reality (VR)-based systems to view complex neuroimages: Rojas, 2015; 2016; 2017). Keiriz et al. (2018) created a CAVE system to explore graph representations of functional connectivity data. The various visualization solutions have different costs and technical characteristics.
VR is a computer-generated 3D environment of realistic-looking scenes or objects that creates the feeling of being immersed in it. This environment is viewed through VR glasses or a VR headset. Oculus Quest (Oculus VR) is a fully standalone VR headset with two controllers.
Here we describe a fully controllable, immersive 3D VR system for educational purposes (for medical doctors and other healthcare professionals) that shows the skull, pial cortex, main subcortical structures (Fischl, 2012), and main brain tracts (Oishi, 2010).
On Visualization and Interpretation of Complex Connectomic Results
Structural and functional connectomes are widely used to characterize differences between individuals and groups (Finn, 2015; Turk, 2019; Hong, 2019). However, visualizing and interpreting these results is challenging, in part due to the large number of connections. Here, we provide a toolkit, as part of the BioImage Suite Web project (https://bioimagesuiteweb.github.io/webapp/connviewer.html), to visualize complex connectome-based results across multiple levels of feature summarization, improving interpretability.