Poster No:
1896
Submission Type:
Abstract Submission
Authors:
Lennart Frahm1, Simon Eickhoff2, Robert Langner3, Veronika Müller3, Theodore Satterthwaite4, Peter Fox5
Institutions:
1Forschungszentrum Jülich, Jülich, North Rhine-Westphalia, 2Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University Düsseldorf, Düsseldorf, North Rhine-Westphalia, 3Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University Düsseldorf, Düsseldorf, North Rhine-Westphalia, 4University of Pennsylvania, Philadelphia, PA, 5The University of Texas Health Science Center at San Antonio, San Antonio, TX
First Author:
Lennart Frahm
Forschungszentrum Jülich
Jülich, North Rhine-Westphalia
Co-Author(s):
Simon Eickhoff
Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University Düsseldorf
Düsseldorf, North Rhine-Westphalia
Robert Langner
Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University Düsseldorf
Düsseldorf, North Rhine-Westphalia
Veronika Müller
Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University Düsseldorf
Düsseldorf, North Rhine-Westphalia
Peter Fox, MD
The University of Texas Health Science Center at San Antonio
San Antonio, TX
Introduction:
The Activation Likelihood Estimation (ALE) algorithm for neuroimaging meta-analysis has frequently been used to delineate differences in the convergence of brain activity between cognitive tasks, domains, or groups by means of meta-analytic contrasts [1,2,3]. The significance of a contrast is determined by comparing the voxel-wise differences between two ALE maps to an empirical null distribution of ALE score differences. This null distribution is approximated by randomly permuting the experiments between the two datasets 10,000 times and calculating difference scores for each permutation. Although this approach is statistically sound and often yields results with high face validity, concerns have been raised that it is not suitable for comparing two datasets of very different sizes [4], as such contrasts may be driven by the larger dataset. To address these concerns, we developed a new, balanced meta-analytic contrast algorithm that calculates 5,000 individual contrasts based on same-sized subsamples of the original datasets and then averages the results. Such "undersampling" approaches are well established in the machine-learning literature [5] for dealing with imbalanced class predictions, a problem conceptually similar to our use case. Here, we compared the original permutation-based contrasts with the newly developed balanced contrasts using simulated ALE datasets.
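For illustration, the following Python sketch contrasts the two strategies under simplifying assumptions; it is not the authors' implementation. Experiments are reduced to precomputed modeled-activation (MA) maps, `ale_map` applies the standard ALE union rule, and the subsample size used by the balanced variant (the size of the smaller dataset) is an assumption of this sketch.

```python
# Illustrative sketch only, not the authors' code. Each dataset is an
# (n_experiments x n_voxels) array of precomputed modeled-activation (MA)
# values; a real analysis would derive these from reported foci via
# Gaussian kernels, as in standard ALE.
import numpy as np

rng = np.random.default_rng(0)

def ale_map(ma):
    # ALE union rule across experiments: ALE = 1 - prod_i(1 - MA_i)
    return 1.0 - np.prod(1.0 - ma, axis=0)

def permutation_contrast(ma_a, ma_b, n_perm=10_000):
    """Classic contrast: compare the observed voxel-wise ALE difference to a
    null built by randomly permuting experiments between the two datasets."""
    observed = ale_map(ma_a) - ale_map(ma_b)
    pooled = np.concatenate([ma_a, ma_b])
    n_a = len(ma_a)
    null = np.empty((n_perm, pooled.shape[1]))
    for i in range(n_perm):
        idx = rng.permutation(len(pooled))
        null[i] = ale_map(pooled[idx[:n_a]]) - ale_map(pooled[idx[n_a:]])
    # One-sided voxel-wise p-values for dataset A > dataset B
    p = (np.sum(null >= observed, axis=0) + 1) / (n_perm + 1)
    return observed, p

def balanced_contrast(ma_a, ma_b, n_subsamples=5_000, k=None):
    """Balanced variant: repeatedly contrast same-sized subsamples drawn from
    both datasets, then average the resulting difference maps."""
    if k is None:
        k = min(len(ma_a), len(ma_b))  # assumed choice of subsample size
    diffs = np.empty((n_subsamples, ma_a.shape[1]))
    for i in range(n_subsamples):
        sub_a = ma_a[rng.choice(len(ma_a), size=k, replace=False)]
        sub_b = ma_b[rng.choice(len(ma_b), size=k, replace=False)]
        diffs[i] = ale_map(sub_a) - ale_map(sub_b)
    return diffs.mean(axis=0)
```

In this toy form, `permutation_contrast` returns the observed difference map with voxel-wise p-values, whereas `balanced_contrast` returns only the averaged difference map; how significance is assessed on that average is left open here.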
Methods:
We simulated datasets of two different sizes, comprising either 150 or 25 experiments. To manipulate the amount of spatial convergence, 0%, 10%, 20%, 30%, or 40% of the experiments per dataset were selected to feature a coordinate at a "true location" [6,7]. For each combination of dataset size and amount of convergence, we created 100 datasets; each experiment's sample size and number of foci were randomly drawn from normal distributions resembling those found in empirical datasets from the BrainMap database [6,8,9]. We then contrasted each dataset of 150 experiments against a dataset of 25 experiments using both the permutation-based and the balanced algorithm. Lastly, we examined the proportion of significant differences at the "true location", averaged over all contrasts with the same amount of convergence per dataset.
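A hypothetical sketch of this simulation setup is given below. The distribution parameters, the brain bounding box, and the coordinate of the "true location" are placeholders; the study drew sample sizes and foci counts from distributions matched to empirical BrainMap statistics [6,8,9].

```python
# Hypothetical simulation sketch; all numeric parameters are placeholders,
# not the values used in the study.
import numpy as np

rng = np.random.default_rng(42)
TRUE_LOCATION = np.array([-42.0, 24.0, 30.0])          # illustrative MNI coordinate
BOUNDS = np.array([[-70, 70], [-100, 70], [-60, 80]])  # rough brain bounding box (mm)

def simulate_experiment(at_true_location):
    n_subjects = max(5, round(rng.normal(20, 5)))  # assumed sample-size distribution
    n_foci = max(1, round(rng.normal(8, 3)))       # assumed foci-count distribution
    # Scatter foci uniformly within the bounding box
    foci = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(n_foci, 3))
    if at_true_location:
        foci[0] = TRUE_LOCATION  # plant one focus at the "true location"
    return {"n_subjects": n_subjects, "foci": foci}

def simulate_dataset(n_experiments, convergence):
    # convergence: fraction (0.0-0.4) of experiments converging at the true location
    flags = np.zeros(n_experiments, dtype=bool)
    flags[: round(convergence * n_experiments)] = True
    rng.shuffle(flags)
    return [simulate_experiment(f) for f in flags]

# 100 datasets per design cell, e.g. 150 experiments at 20% convergence:
datasets = [simulate_dataset(150, 0.20) for _ in range(100)]
```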
Results:
In general, both contrast algorithms performed reasonably well, reliably detecting large convergence differences independent of dataset size. Both algorithms slightly favored the larger dataset, reporting more contrasts in its favor even for weaker convergence differences; this effect was slightly stronger for the balanced contrasts. The biggest difference between the two algorithms appears to be that the balanced contrast is generally more sensitive, detecting significant differences even when convergence is only slightly larger in one of the datasets, whereas the unbalanced permutation contrast often yields no significant differences in these cases.

Figure: Direction of significant contrasts found when comparing large (n = 150 experiments) datasets with small (n = 25 experiments) datasets.
Conclusions:
Contrary to previous concerns, we found little evidence that unbalanced permutation-based contrasts are strongly driven by the larger dataset. This is an important finding, retrospectively validating ALE meta-analyses that employed this methodology. The newly developed balanced subsampling contrast also performed well, reliably detecting differences in convergence between the two datasets. However, its moderately higher sensitivity relative to the traditional approach comes at the cost of a somewhat stronger bias toward detecting significant differences in favor of the larger dataset. At present, we see no clearly preferred algorithm. Future research should investigate specific use cases for both contrast algorithms and validate both approaches on a large number of real-world ALE datasets.
Modeling and Analysis Methods:
Methods Development 1
Other Methods 2
Keywords:
Other - meta-analysis; activation likelihood estimation
1|2 Indicates the priority used for review
References:
1. Ardila, A., Bernal, B., & Rosselli, M. (2018). Executive functions brain system: An activation likelihood estimation meta-analytic study. Archives of Clinical Neuropsychology, 33(4), 379-405.
2. Langner, R., & Eickhoff, S. B. (2013). Sustaining attention to simple tasks: A meta-analytic review of the neural mechanisms of vigilant attention. Psychological Bulletin, 139(4), 870–900.
3. Worringer, B., Langner, R., Koch, I., Eickhoff, S. B., Eickhoff, C. R., & Binkofski, F. C. (2019). Common and distinct neural correlates of dual-tasking and task-switching: a meta-analytic review and a neuro-cognitive processing model of human multitasking. Brain Structure and Function, 224, 1845-1869.
4. Xu, A., Larsen, B., Baller, E. B., Scott, J. C., Sharma, V., Adebimpe, A., ... & Satterthwaite, T. D. (2020). Convergent neural representations of experimentally-induced acute pain in healthy volunteers: A large-scale fMRI meta-analysis. Neuroscience & Biobehavioral Reviews, 112, 300–323.
5. Liu, X. Y., Wu, J., & Zhou, Z. H. (2008). Exploratory undersampling for class-imbalance learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39(2), 539-550.
6. Eickhoff, S. B., Nichols, T. E., Laird, A. R., Hoffstaedter, F., Amunts, K., Fox, P. T., ... & Eickhoff, C. R. (2016). Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation. Neuroimage, 137, 70-85.
7. Frahm, L., Cieslik, E. C., Hoffstaedter, F., Satterthwaite, T. D., Fox, P. T., Langner, R., & Eickhoff, S. B. (2022). Evaluation of thresholding methods for activation likelihood estimation meta-analysis via large-scale simulations. Human Brain Mapping, 43(13), 3987–3997.
8. Fox, P. T., & Lancaster, J. L. (2002). Mapping context and content: The BrainMap model. Nature Reviews Neuroscience, 3(4), 319–321. https://doi.org/10.1038/nrn789
9. Laird, A. R., Lancaster, J. L., & Fox, P. T. (2005). BrainMap: The social evolution of a human brain mapping database. Neuroinformatics, 3(1), 65–78. https://doi.org/10.1385/ni:3:1:065