Developing a secure, browser-based and interactive image segmentation system for medical images

Poster No:

2000 

Submission Type:

Abstract Submission 

Authors:

Thuy Dao1, Xincheng Ye1, Chris Rorden2, Korbinian Eckstein1, Daniel Haehn3, Shruti Varade3, Steffen Bollmann1

Institutions:

1University of Queensland, Brisbane, Queensland, 2University of South Carolina, Columbia, SC, 3University of Massachusetts Boston, Boston, MA

First Author:

Thuy Dao  
University of Queensland
Brisbane, Queensland

Co-Author(s):

Xincheng Ye  
University of Queensland
Brisbane, Queensland
Chris Rorden  
University of South Carolina
Columbia, SC
Korbinian Eckstein  
University of Queensland
Brisbane, Queensland
Daniel Haehn  
University of Massachusetts Boston
Boston, MA
Shruti Varade  
University of Massachusetts Boston
Boston, MA
Steffen Bollmann  
University of Queensland
Brisbane, Queensland

Introduction:

Manually delineating regions of interest (ROIs) during clinical diagnosis is a time-consuming endeavor. Moreover, it requires significant domain expertise to interpret and segment pathologies precisely. Advancements in deep learning (DL) have the potential to address these challenges by enabling the automatic extraction of meaningful insights from medical imaging data (Liyanage et al., 2019). Notably, the Segment Anything Model (SAM) (Kirillov et al., 2023) has demonstrated its capability in zero-shot segmentation. Such an approach could accelerate the demanding segmentation task for clinicians.
However, translating such models to clinical applications is hindered by patient privacy considerations, complex software setups and limited hardware resources. Moreover, existing tools have limitations: some require server components to run DL models, others require complex software installations, and some support only a narrow range of image formats, all of which impede widespread deployment (Aljabri et al., 2022; Gorman et al., 2023; Masoud et al., 2023).
Therefore, we aim to develop a zero-footprint, user-friendly, interactive and secure browser-based deployment of DL models that supports a variety of medical imaging formats in clinical environments. This abstract showcases a proof of concept with its core features: visualization, lesion detection with a fine-tuned SAM, and annotation refinement (https://iishiishii.github.io/deepsyence/).

Methods:

To address the challenge of diverse medical image formats, the proposed platform utilizes the NiiVue package (Niivue, n.d.) for a versatile viewing experience. The platform caters to various formats, including voxel-based images, meshes, mesh overlays, tractography and DICOM, across all major browsers. Upon image upload, users access various functions for visualization and processing, offering an intuitive environment to interact with medical images.
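Embedding the viewer described above is lightweight. The following is a minimal sketch, assuming the @niivue/niivue npm package and a canvas element with id "gl" on the page; the image URL is a placeholder for illustration:

```javascript
// Minimal sketch of embedding the NiiVue viewer in a web page.
// Assumes a <canvas id="gl"> element; the volume URL is hypothetical.
async function showViewer(imageUrl) {
  const { Niivue } = await import('@niivue/niivue'); // loaded lazily in the browser
  const nv = new Niivue();
  await nv.attachTo('gl');                    // bind the viewer to the canvas
  await nv.loadVolumes([{ url: imageUrl }]);  // NiiVue infers the format from the file
  return nv;
}

// Helper: rough client-side check that a file name matches one of the
// displayable formats mentioned above (extension list is illustrative).
function isDisplayableFormat(filename) {
  const lower = filename.toLowerCase();
  return ['.nii', '.nii.gz', '.mz3', '.trk', '.dcm']
    .some((ext) => lower.endsWith(ext));
}
```

Because NiiVue renders via WebGL, the same snippet works unchanged across the major browsers.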
The automated annotation process relies on the SAM model, which has been fine-tuned for a stroke lesion annotation task. This serves as a blueprint for generalizing to other models aimed at similar annotation tasks in medical imaging. To ensure the interoperability of the platform for models trained with different frameworks, we leverage the Open Neural Network Exchange (ONNX) (ONNX: Open Neural Network Exchange, n.d.), a model representation format. The model is converted to the ONNX format and executed via ONNX Runtime Web, a lightweight JavaScript library that enables the execution of ONNX models locally within web browsers. ONNX Runtime Web is resource-efficient and supports a wide range of hardware, including CPUs, GPUs, and TPUs, which makes it adaptable to various setups. The only software dependency on the clinician's computer is a web browser. Computation runs in parallel in the browser using Web Worker threads, keeping the user interface responsive. Crucially, image data stays local in the browser sandbox and all computation is performed client-side, setting the platform apart from the prevalent server-side embedding computation seen in most available tools. This edge-computing approach ensures data privacy and enables the processing of medical imaging datasets even behind hospital firewalls, without requiring a complex setup.
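The client-side inference path can be sketched as follows. This is a simplified illustration, not the platform's actual code: the model file name and the input tensor name ('input') are placeholders, and the intensity normalization shown is one common preprocessing choice.

```javascript
// Sketch of fully client-side inference with ONNX Runtime Web.
// Model path and tensor name are hypothetical; in practice this would
// run inside a Web Worker so the UI thread stays responsive.
async function segmentInBrowser(imageData, dims) {
  const ort = await import('onnxruntime-web');            // runs in the browser
  const session = await ort.InferenceSession.create('./model.onnx');
  const input = new ort.Tensor('float32', normalizeIntensities(imageData), dims);
  return session.run({ input });                          // all compute stays local
}

// Typical preprocessing: rescale raw voxel intensities to [0, 1] as float32,
// so the network sees the value range it was trained on.
function normalizeIntensities(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const range = max - min || 1; // avoid division by zero for flat images
  return Float32Array.from(values, (v) => (v - min) / range);
}
```

Because the model file is fetched like any static asset and executed locally, no image data ever leaves the machine.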

Results:

The proof-of-concept application runs fully client-side and implements a fine-tuned SAM to segment brain lesions in medical images (see Figure 1).
The automatic annotation process consists of two steps: encoding the image and selecting the ROI. Once the image is encoded, segmentation can be performed interactively by clicking on the image. The segmented ROIs are displayed as a starting point for manual refinement.
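The encode-once, decode-per-click pattern can be sketched as below. File names, tensor names and the split into separate encoder/decoder ONNX files are illustrative assumptions (SAM's reference ONNX export uses a similar split); the real decoder also expects additional mask inputs, omitted here for brevity.

```javascript
// Sketch of the two-step interactive loop: the heavy image encoder runs once,
// then the lightweight decoder re-runs for every user click.
async function interactiveSegment(imageTensorData, dims) {
  const ort = await import('onnxruntime-web');
  const encoder = await ort.InferenceSession.create('./sam_encoder.onnx');
  const decoder = await ort.InferenceSession.create('./sam_decoder.onnx');

  // Step 1: encode the image once (expensive, done at upload time).
  const image = new ort.Tensor('float32', imageTensorData, dims);
  const { image_embeddings } = await encoder.run({ image });

  // Step 2: return a cheap per-click callback that reuses the embedding.
  return async function onClick(x, y) {
    const [px, py] = toModelCoords(x, y, dims[3], dims[2], 1024);
    const point_coords = new ort.Tensor('float32', Float32Array.from([px, py]), [1, 1, 2]);
    const point_labels = new ort.Tensor('float32', Float32Array.from([1]), [1, 1]); // 1 = foreground
    // (the reference SAM decoder also takes mask_input et al., omitted here)
    return decoder.run({ image_embeddings, point_coords, point_labels });
  };
}

// Map a click in native image coordinates onto the model's fixed input grid
// (SAM resizes the longest image side to 1024 pixels).
function toModelCoords(x, y, width, height, target) {
  const scale = target / Math.max(width, height);
  return [x * scale, y * scale];
}
```

Separating the two steps is what makes the interaction feel instantaneous: only the small decoder re-runs on each click.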

Conclusions:

The goal of this project is an open-source platform supporting DL models without complex installation, fostering collaboration among institutions. We are working on improving the execution concurrency to accelerate the runtime and integrating more models to enable various segmentation workflows.

Modeling and Analysis Methods:

Classification and Predictive Modeling
Image Registration and Computational Anatomy 2
Segmentation and Parcellation 1

Keywords:

Computational Neuroscience
Computing
Open-Source Code
Open-Source Software
Workflows

1|2 Indicates the priority used for review
Supporting Image: ohbm2024.png
 

References:

Aljabri, M. (2022). Towards a better understanding of annotation tools for medical imaging: A survey. Multimedia Tools and Applications, 81(18), 25877–25911.
Gorman, C. (2023). Interoperable slide microscopy viewer and annotation tool for imaging data science and computational pathology. Nature Communications, 14(1), 1572.
Kirillov, A. (2023). Segment Anything (arXiv:2304.02643). arXiv.
Liyanage, H. (2019). Artificial Intelligence in Primary Health Care: Perceptions, Issues, and Challenges: Primary Health Care Informatics Working Group Contribution to the Yearbook of Medical Informatics 2019. Yearbook of Medical Informatics, 28(01), 041–046.
Masoud, M. (2023). Brainchop: In-browser MRI volumetric segmentation and rendering. Journal of Open Source Software, 8(83), 5098.
Niivue. (n.d.). https://github.com/niivue/niivue
ONNX: Open Neural Network Exchange. (n.d.). https://github.com/onnx/onnx