Journal of King Saud University: Computer and Information Sciences (Apr 2025)

BMSNet: Brain-inspired mixed ultrasound image segmentation network in cancer diagnosis

  • Chukwuemeka Clinton Atabansi,
  • Jing Nie,
  • Hui Li,
  • Haijun Liu,
  • Xichuan Zhou

DOI
https://doi.org/10.1007/s44443-025-00036-z
Journal volume & issue
Vol. 37, no. 3
pp. 1–19

Abstract

Accurate segmentation of tumor or organ regions in ultrasound images is essential for cancer diagnosis and treatment planning in biomedical engineering. However, the diversity and noisy nature of ultrasound images pose significant challenges to extracting robust spatial features for precise tumor boundary segmentation. The brain processes visual data hierarchically, adapting attention across multiple scales to capture both fine details and broader context simultaneously. Inspired by this, multi-grained information, including local context and long-range dependencies, is introduced to address these challenges and improve segmentation performance. Existing methods face limitations in this regard: convolutional networks (ConvNets) excel at capturing fine-grained spatial details but fail to account for long-range dependencies, while self-attention mechanisms focus on global context but struggle to extract continuous local information. The Mamba architecture strikes a balance between local and global information but is not fully optimal in either domain. To address these shortcomings, BMSNet is proposed as a novel deep learning-based architecture that combines the advantages of ConvNets, self-attention mechanisms, and Mamba to improve the precision of ultrasound image segmentation. Experiments on six ultrasound datasets demonstrate that BMSNet outperforms existing methods across three key metrics: binary accuracy (ACC), intersection over union (IoU), and Dice coefficient (DSC).
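As an illustration of the local/global fusion idea the abstract describes, the following PyTorch sketch combines a convolutional branch (fine-grained local detail) with a self-attention branch (long-range dependencies) in a single residual block. It is a hypothetical toy block, not the BMSNet architecture from the paper; it omits the Mamba (state-space) branch, and the class name LocalGlobalBlock and all hyperparameters are invented for this example.

```python
import torch
import torch.nn as nn

class LocalGlobalBlock(nn.Module):
    """Hypothetical fusion block: a conv branch captures fine-grained local
    detail while a self-attention branch models long-range dependencies.
    Illustrative only -- not the BMSNet block from the paper."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: 3x3 convolution over the feature map.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: multi-head self-attention over flattened pixels.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # 1x1 convolution fuses the concatenated branches back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)
        # Flatten spatial dims into a token sequence for attention: (B, H*W, C).
        tokens = x.flatten(2).transpose(1, 2)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        glob = self.norm(attn_out).transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, glob], dim=1)) + x  # residual

# Smoke test on a dummy feature map.
block = LocalGlobalBlock(channels=32)
feats = torch.randn(1, 32, 16, 16)
print(block(feats).shape)  # torch.Size([1, 32, 16, 16])
```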
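The three reported metrics are standard for binary segmentation and can be computed directly from predicted and ground-truth masks. The NumPy sketch below shows the usual definitions; it is not the authors' evaluation code.

```python
import numpy as np

def binary_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels whose predicted label matches the ground truth."""
    return float((pred == target).mean())

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((inter + eps) / (union + eps))

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

# Toy example: a 4x4 predicted mask vs. ground truth.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]], dtype=bool)
target = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]], dtype=bool)
print(f"ACC={binary_accuracy(pred, target):.3f}, "
      f"IoU={iou(pred, target):.3f}, DSC={dice(pred, target):.3f}")
# ACC=0.938, IoU=0.833, DSC=0.909
```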

Keywords