Poster No:
2254
Submission Type:
Abstract Submission
Authors:
Ines Gonzalez Pepe1, Yohan Chatelain1, Antoine Hebert2, Tristan Glatard1
Institutions:
1Concordia University, Montreal, Quebec, 2Lace Lithography, Barcelona, Catalonia
Introduction:
Quantifying numerical uncertainty – the error that computer programs accumulate through finite-precision arithmetic – is crucial for user confidence in application outputs. In neuroimaging, amidst a reproducibility crisis, ensuring research reliability is paramount for trusted decisions. Computational neuroimaging tools, including those relying on deep learning (DL), are prone to numerical uncertainty, which can jeopardise the reliability of their results [1-2]. However, to assess DL models, numerical uncertainty tools must be scalable and require minimal code modification and overhead. We present Fuzzy PyTorch (FP), a compiled framework for the rapid evaluation of numerical uncertainty in DL methods for neuroimaging.
Methods:
Numerical uncertainty originates in the rounding errors introduced by floating-point arithmetic. We employ Monte Carlo Arithmetic (MCA) [3], a stochastic arithmetic technique that introduces controlled randomness into floating-point operations to assess numerical uncertainty. Verificarlo [4], an MCA implementation, replaces floating-point operations with their MCA counterparts at compile time using a Clang-based compiler. Despite the overhead introduced by code instrumentation, Verificarlo's multi-threading and parallelization capabilities enable efficient performance. FP applies Verificarlo to the PyTorch source code in order to obtain an MCA-instrumented DL framework for measuring numerical uncertainty. Because the MKL library used by default in PyTorch is closed source and could not be recompiled, we built PyTorch against the open-source BLAS and LAPACK libraries [5]. We compare FP's performance to that of Verrou [6], another tool that implements MCA through dynamic binary instrumentation, albeit at a much higher computational cost than Verificarlo's compilation approach. Verrou serves as the reference tool, as it has already been used to analyse numerical uncertainty in DL models [7-8]. FP allows standard model training and testing while enabling numerical uncertainty experiments, providing insight into a model's numerical properties at various stages.
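The principle behind MCA can be illustrated with a toy model: each floating-point result is perturbed by uniform noise at the scale of the last representable bit of a chosen virtual precision, so repeated runs sample the distribution of plausible rounding errors. This is a minimal sketch under our own naming (`mca_perturb`, `mca_add` are illustrative); real tools such as Verificarlo and Verrou instrument the actual floating-point instructions rather than wrapping them in Python:

```python
import random

def mca_perturb(x, precision=24, rng=random):
    """Toy 'random rounding': add uniform noise of about one ulp
    at a virtual precision of `precision` bits (24 ~ single precision).
    Illustrative only; not the Verificarlo implementation."""
    if x == 0.0:
        return 0.0
    ulp = abs(x) * 2.0 ** (1 - precision)   # magnitude of one ulp at that precision
    return x + (rng.random() - 0.5) * ulp   # uniform noise in [-ulp/2, +ulp/2]

def mca_add(a, b, precision=24):
    """An MCA-instrumented addition: perturb the exactly computed sum."""
    return mca_perturb(a + b, precision)

# Repeating the same computation yields a distribution whose spread
# reflects the numerical uncertainty of the operation.
samples = [mca_add(0.1, 0.2) for _ in range(10)]
```

Running an operation many times under such perturbations and summarising the spread of the outputs is the basic experiment that both FP and Verrou automate at the level of the whole model.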
Results:
To assess FP, unit tests were designed to validate the instrumentation and to ensure that the simulated errors are random and unbiased. FP was then applied to a convolutional neural network (CNN) pre-trained on the MNIST digit dataset, a stable and well-solved DL problem. We observe slight uncertainty in the class probabilities (Fig. 1), of the same magnitude for both FP and Verrou. However, given the small magnitude of the numerical noise (10⁻⁶), it does not propagate past the argmax operation that determines the model's final prediction. This noise within the class probabilities, insignificant for final predictions, aligns with the assumption that MNIST classification is stable owing to the quality of the dataset and the simplicity of the task. Table 1 shows that FP's slowdown is half that of Verrou. FP can be further sped up through multi-threading, which Verrou does not support. Verificarlo also maintains alternative backends to the default one tested (the backend common to both Verificarlo and Verrou) that further speed up the instrumentation. DL models are notorious for their resource consumption, so overhead optimizations ensure that numerical uncertainty investigations remain feasible within the computational constraints imposed by DL models.
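The statistic reported in Fig. 1 can be computed schematically: given class probabilities collected over repeated MCA runs of the same input, the per-class standard deviation quantifies the uncertainty, and the per-run argmax shows whether noise of magnitude ~10⁻⁶ ever changes the prediction. A sketch with synthetic numbers (the arrays below are illustrative stand-ins, not the actual MNIST outputs):

```python
import numpy as np

# Hypothetical stand-in for 10 MCA inference runs on one MNIST image
# (shape: runs x classes). In practice these probabilities come from
# repeated inference under Fuzzy PyTorch or Verrou.
rng = np.random.default_rng(0)
base = np.array([0.01] * 9 + [0.91])                     # confident "9"
probs = base + rng.normal(scale=1e-6, size=(10, 10))     # noise ~1e-6, as observed

per_class_std = probs.std(axis=0)        # Fig. 1-style per-class spread
predictions = probs.argmax(axis=1)       # final decision of each run

# Noise this small never flips the argmax, so the prediction is stable.
stable = bool((predictions == predictions[0]).all())
```

When the noise magnitude approaches the gap between the top two class probabilities, `stable` would become False, which is the failure mode such an analysis is designed to detect.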

·Figure 1: Standard Deviation of MNIST Class Probabilities. Left: standard deviation across 10 MCA samples for Fuzzy PyTorch; Right: standard deviation across 10 MCA samples for Verrou PyTorch

·Table 1: Comparison of Fuzzy PyTorch with Verrou PyTorch for Runtime and Slowdown Factor
Conclusions:
FP, an efficient tool for measuring numerical uncertainty in deep learning models, offers simplicity through a user-friendly Docker image. FP is faster than other MCA implementations, open source, and publicly available. Successfully tested on MNIST, it is adaptable to neuroimaging models such as FastSurfer and SynthMorph [9-10]. In brief, the analysis of numerical uncertainty in deep learning models is crucial for ensuring the reliability of neuroimaging results, contributing to the robustness and trustworthiness of diagnostic and treatment applications in healthcare. FP facilitates the replication and analysis of DL uncertainty, providing a valuable resource for accelerating research in neuroimaging and beyond.
Modeling and Analysis Methods:
Other Methods 2
Neuroinformatics and Data Sharing:
Informatics Other 1
Keywords:
Computational Neuroscience
Computing
Informatics
Machine Learning
Open-Source Code
Open-Source Software
Statistical Methods
Other - Numerical Uncertainty
1|2 Indicates the priority used for review
References:
G. Kiar (2021), “Numerical uncertainty in analytical pipelines lead to impactful variability in brain networks,” PLOS One, vol. 16, no. 11, p. e0250755.
A. Salari (2021), “Accurate simulation of operating system updates in neuroimaging using monte-carlo arithmetic,” in Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Perinatal Imaging, Placental and Preterm Image Analysis: 3rd International Workshop, UNSURE 2021, and 6th International Workshop, PIPPI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, October 1, 2021, Proceedings 3. Springer, pp. 14–23.
D. S. Parker (1997), Monte Carlo Arithmetic: Exploiting Randomness in Floating-Point Arithmetic. University of California (Los Angeles). Computer Science Department.
C. Denis (2016), “Verificarlo: Checking Floating Point Accuracy through Monte Carlo Arithmetic,” in 2016 IEEE 23rd Symposium on Computer Arithmetic (ARITH). Los Alamitos, CA, USA: IEEE Computer Society, pp. 55–62. [Online]. Available: https://doi.ieeecomputersociety.org/10.1109/ARITH.2016.31.
Netlib (2019), LAPACK – Linear Algebra PACKage. Available from: https://www.netlib.org/lapack/.
F. Févotte (2016), “Verrou: Assessing Floating-Point Accuracy Without Recompiling”.
I. Gonzalez Pepe (2023), “Numerical Uncertainty of Convolutional Neural Networks Inference for Structural Brain MRI Analysis,” in International Workshop on Uncertainty for Safe Utilization of Machine Learning in Medical Imaging. Springer, pp. 64–73.
I. Gonzalez Pepe (2022), “Numerical Stability of DeepGOPlus Inference,” arXiv preprint arXiv:2212.06361.
L. Henschel (2020), “Fastsurfer A Fast and Accurate Deep Learning Based Neuroimaging Pipeline,” NeuroImage, vol. 219, p. 117012.
M. Hoffmann (2021), “SynthMorph: Learning Contrast-Invariant Registration Without Acquired Images,” IEEE Transactions on Medical Imaging, vol. 41, no. 3, pp. 543–558.