Complex & Intelligent Systems (Nov 2024)

Segment anything model for few-shot medical image segmentation with domain tuning

  • Weili Shi,
  • Penglong Zhang,
  • Yuqin Li,
  • Zhengang Jiang

DOI
https://doi.org/10.1007/s40747-024-01625-7
Journal volume & issue
Vol. 11, no. 1
pp. 1 – 17

Abstract

Medical image segmentation constitutes a crucial step in the analysis of medical images, with extensive applications and research significance in medical research and practice. Convolutional neural networks have achieved great success in medical image segmentation. However, acquiring large labeled datasets remains difficult due to the substantial expertise and time required for image labeling, as well as heightened patient privacy concerns. To address the scarcity of labeled medical image data, we propose Domain Tuning SAM for Medical images (DT-SAM). We construct an encoder from SAM using a parameter-efficient fine-tuning strategy that updates only a small fraction of weight increments while preserving the majority of the pre-trained weights in the SAM encoder, thereby reducing the number of training samples required. Our approach retains only the SAM encoder structure and pairs it with a decoder similar to the U-Net decoder, with redesigned skip connections that concatenate encoder-extracted features; this effectively decodes the encoder features and preserves edge information. We conducted comprehensive experiments on three publicly available medical image segmentation datasets. The combined results show that our method can effectively perform few-shot medical image segmentation. With just one labeled sample, it achieves a Dice score of 63.51%, an HD of 17.94, and an IoU score of 73.55% on the Heart task; an average Dice score of 46.01%, an HD of 10.25, and an IoU score of 65.92% on the Prostate task; and Dice, HD, and IoU scores of 88.67%, 10.63, and 90.19% on BUSI. Remarkably, with few training samples, our method consistently outperforms various SAM-based and CNN-based methods.
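
The abstract describes two ingredients: trainable low-rank weight increments added to a frozen pre-trained encoder, and a U-Net-style decoder whose skip connections concatenate encoder features. The sketch below illustrates that general idea only; it is not the authors' code, and the layer sizes, rank, and the stand-in linear layer are illustrative assumptions rather than DT-SAM's actual configuration.

```python
# Minimal sketch (assumptions, not DT-SAM's implementation): LoRA-style
# low-rank increments on frozen pre-trained weights, plus a U-Net-style
# decoder block that concatenates encoder features via a skip connection.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank increment."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep pre-trained weights fixed
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # increment starts at zero

    def forward(self, x):
        return self.base(x) + self.lora_b(self.lora_a(x))


class DecoderBlock(nn.Module):
    """Upsample, concatenate the matching encoder feature map, then convolve."""
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        return self.conv(torch.cat([x, skip], dim=1))


# Illustrative use: wrap a projection layer of a frozen encoder so that only
# the low-rank increments (and the decoder) carry trainable parameters.
encoder_proj = nn.Linear(256, 256)            # stand-in for a SAM attention projection
tuned = LoRALinear(encoder_proj, rank=4)
trainable = [p for p in tuned.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))      # only the low-rank increments train
```

In this kind of setup, the bulk of the encoder's parameters stay at their pre-trained values and only the small increment matrices and the decoder are updated, which is why few labeled samples can suffice.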

Keywords