Poster No:
2313
Submission Type:
Abstract Submission
Authors:
Pratyush Reddy Gaggenapalli1, Mohamed Masoud1, Farfalla Hu1, Sergey Plis1
Institutions:
1Georgia State University, Atlanta, GA
Introduction:
Decentralised learning, unlike the centralised approach, operates on private data distributed across autonomous sites rather than held on a single server. This paradigm shift offers heightened privacy [9] and scalability by harnessing the capabilities of many devices, which is particularly crucial in sensitive domains such as healthcare and neuroimaging. However, challenges arise from larger message sizes, exacerbated by bandwidth limitations, which cause latency spikes when exchanging large models. The problem intensifies for deep learning models, where higher accuracy demands reconciling more parameters across sites. Conventional remedies such as sparsity, distillation, and quantization often sacrifice accuracy to reduce parameter size. In our study, we employed MeshNet [1-3] (Figure 1a), a full-brain volumetric segmentation model [5], in its original form, without any special compression but with a controllable number of channels, to establish a distributed learning system. Encouragingly, our outcomes show progress towards balanced node training while preserving accuracy, scalability, and resource optimization.
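To make the channel-width control concrete, the following is a minimal sketch of a MeshNet-style network: a stack of dilated 3D convolutions whose size is governed by a single channel count. The layer count, dilation schedule, and channel width shown here are illustrative assumptions, not the exact configuration from [1-3].

    # Minimal sketch (assumed configuration, not the one from [1-3]) of a
    # MeshNet-style fully convolutional segmentation network: a stack of
    # 3x3x3 dilated 3D convolutions with a configurable channel width, so
    # model size is tuned by one number rather than by compression tricks.
    import torch
    import torch.nn as nn

    def meshnet_like(in_ch=1, n_classes=3, channels=21,
                     dilations=(1, 1, 2, 4, 8, 1)):
        layers = []
        prev = in_ch
        for d in dilations:
            layers += [nn.Conv3d(prev, channels, kernel_size=3,
                                 padding=d, dilation=d),
                       nn.BatchNorm3d(channels),
                       nn.ReLU(inplace=True)]
            prev = channels
        # 1x1x1 convolution maps features to per-voxel class scores
        layers.append(nn.Conv3d(prev, n_classes, kernel_size=1))
        return nn.Sequential(*layers)

    model = meshnet_like(channels=21)           # width is the only size knob
    out = model(torch.randn(1, 1, 38, 38, 38))  # subvolume -> per-voxel logits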
Methods:
Centralised gradient aggregation (Figure 2a) addresses the challenges inherent in decentralised data processing across multiple nodes while preserving data privacy and computational efficiency. Each node processes its local data batches and generates gradients that encapsulate that node's unique insights. These gradients converge at a centralised hub, which forms a comprehensive view of model performance and guides iterative model updates, enabling collaboration irrespective of geographical or dataset disparities. This synchronised refinement cycle allows nodes to learn not only from local data but also from the distributed network's collective knowledge, enhancing individual models and promoting shared learning.

The incorporation of COINSTAC, alongside WandB for metric logging (Figure 2b), is central to optimising the distributed training environment. COINSTAC streamlines communication, reducing overhead and enhancing training efficiency across nodes, while simplifying fault tolerance and widening accessibility, thereby lowering the barrier to adopting distributed training. WandB logging provides comprehensive monitoring, ensuring precise tracking of training progress and supporting informed decision-making. Together, this standardised methodology optimises training efficiency and broadens the use of networked nodes across diverse domains.
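As a concrete illustration of the aggregation loop described above, the following single-process sketch simulates nodes computing gradients on their local batches while a central role averages them and applies an identical update to every node's model copy. The function names and the two-node toy setup are assumptions for illustration; this is not the COINSTAC API.

    # Single-process sketch of centralised gradient aggregation (assumed,
    # illustrative names; not the COINSTAC API). Each simulated node computes
    # gradients on its private batch, the "remote" averages them, and every
    # node applies the same averaged step so all copies stay synchronised.
    import copy
    import torch
    import torch.nn as nn

    def local_gradients(model, batch, loss_fn):
        """One node: forward/backward on its private batch, return gradients."""
        model.zero_grad()
        x, y = batch
        loss = loss_fn(model(x), y)
        loss.backward()
        return [p.grad.detach().clone() for p in model.parameters()], loss.item()

    def aggregate_and_step(models, node_batches, loss_fn, lr=1e-2):
        """'Remote' role: average per-node gradients, apply identical update."""
        grads, losses = zip(*[local_gradients(m, b, loss_fn)
                              for m, b in zip(models, node_batches)])
        mean_grads = [torch.stack(gs).mean(dim=0) for gs in zip(*grads)]
        for m in models:                    # keep every node's copy in sync
            with torch.no_grad():
                for p, g in zip(m.parameters(), mean_grads):
                    p -= lr * g
        return sum(losses) / len(losses)

    # toy example: two nodes share the same architecture but hold different data
    base = nn.Linear(10, 2)
    models = [copy.deepcopy(base) for _ in range(2)]
    batches = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(2)]
    avg_loss = aggregate_and_step(models, batches, nn.CrossEntropyLoss())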
Results:
Our experiments, conducted in the COINSTAC decentralised simulator [10,11] (Figure 1a), ensure equitable task distribution among nodes, fostering balanced contributions to model optimization. Logging batch-specific metrics for each node provides valuable insight into node-specific dynamics and overall training efficiency. Figure 2 and Table 1 compare cross-entropy loss for standard and decentralised training of MeshNet and CNN models on the CIFAR-10 dataset, showing their convergence and accuracy across epochs and demonstrating the effectiveness of our decentralised approach in optimising model performance.
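The per-node, per-batch logging could look like the following sketch using the WandB client; the project, group, and metric names are illustrative assumptions, not the exact configuration used in our runs.

    # Illustrative sketch of per-node, per-batch metric logging with Weights &
    # Biases (wandb); project, group, and metric names are assumptions.
    import wandb

    def log_node_metrics(node_id, epoch, batch_idx, loss, accuracy):
        wandb.log({
            f"node_{node_id}/cross_entropy_loss": loss,
            f"node_{node_id}/accuracy": accuracy,
            "epoch": epoch,
            "batch": batch_idx,
        })

    # one run per node, grouped so dashboards can compare nodes side by side
    run = wandb.init(project="meshnet-decentralised", group="simulation",
                     name="node_0")
    log_node_metrics(node_id=0, epoch=1, batch_idx=0, loss=0.92, accuracy=0.71)
    run.finish()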

Figure 1: The architectural diagram outlines MeshNet's neural structure, while logged metrics track training progress across various batch levels. Plots visualise input data characteristics and model-

Figure 2: The diagram showcases the decentralised learning framework, delineating nodes interconnected for collaborative training. Each node logs metrics for every batch during training, capturing ind
Conclusions:
Our progress in decentralised MeshNet learning marks a meaningful advance towards balanced node training. Building on COINSTAC [10] amplifies our approach and enriches distributed learning strategies. This ongoing project carries considerable potential to reshape collaborative learning paradigms while ensuring both efficiency and scalability.
Modeling and Analysis Methods:
Classification and Predictive Modeling 2
Novel Imaging Acquisition Methods:
Anatomical MRI 1
Keywords:
Machine Learning
MRI
Other - distributed
1|2 indicates the priority used for review
References:
[1] Fedorov, A., et al. (2017). "Almost instant brain atlas segmentation for large-scale studies." arXiv preprint arXiv:1711.00457.
[2] Yu, F., & Koltun, V. (2016). "Multi-scale context aggregation by dilated convolutions." https://doi.org/10.48550/arXiv.1511.07122
[3] Fedorov, A., Johnson, J., Damaraju, E., et al. (2017). "End-to-end learning of brain tissue segmentation from imperfect labeling." IEEE International Joint Conference on Neural Networks (IJCNN).
[4] Masoud, M., Pratyush, G., & Plis, S. (2023). "Brainchop: Next generation web-based neuroimaging application." arXiv:2310.16162.
[5] Masoud, M., Hu, F., & Plis, S. (2023). "Brainchop: In-browser MRI volumetric segmentation and rendering." Journal of Open Source Software, 8(83), 5098. doi:10.21105/joss.05098
[6] Van Essen, D. C., Smith, S. M., et al. (2013). "The WU-Minn Human Connectome Project: an overview." NeuroImage, 80, 62-79.
[7] Brett, M., et al. (2023). nipy/nibabel: 5.1.0. Zenodo. https://doi.org/10.5281/zenodo.7795644
[8] NIfTI Reader (2017). GitHub repository. https://github.com/rii-mango/NIFTI-Reader-JS
[9] Plis, S. M., et al. (2016). "COINSTAC: a privacy enabled model and prototype for leveraging and processing decentralized brain imaging data." Frontiers in Neuroscience, 10, 365.
[10] COINSTAC computation. GitHub repository. https://github.com/trendscenter/coinstac-computation
[11] COINSTAC simulator. GitHub repository. https://github.com/trendscenter/coinstac-computation#2-send-data-across-local-----remoteexample
[12] MeshNet distributed learning project (2023). GitHub repository. https://github.com/neuroneural/meshnetproj.git