IEEE Access (Jan 2024)
SAM-Att: A Prompt-Free SAM-Related Model With an Attention Module for Automatic Segmentation of the Left Ventricle in Echocardiography
Abstract
Studying the structure and function of the heart through the left ventricle is one of the most common approaches to diagnosing heart disease. Automatic segmentation of the left ventricle can be achieved with deep learning, and researchers have conducted a series of explorations in this field. Recently, the segment anything model (SAM) has achieved significant success on natural images, sparking considerable interest in whether it can be applied equally successfully to medical imaging. SAM's interactive interface enables zero-shot and few-shot learning in the natural image domain, yielding accurate segmentation. However, its reliance on natural-image prompts such as points, boxes, and text limits fully automatic segmentation of medical images. To address this issue, this paper explores a prompt-free SAM-related model with an attention module for automatic segmentation of the left ventricle in echocardiography, named SAM-Att. The model applies a low-rank fine-tuning strategy upstream, introduces an attention mechanism downstream, and accomplishes automatic left-ventricle segmentation starting from weights pretrained on the SAM foundation model. On the test set, SAM-Att achieves a Dice similarity coefficient (DSC) of 94.49% and a Hausdorff distance (HD) of 3.505 mm; accuracy reaches 98.83%, with a precision of 93.65% and a recall of 94.77%. A performance comparison of SAM-Att with other SAM-related models (SAM-b, MSA, SAM-CNN, AutoSAM, SAMed) is conducted on the same echocardiography dataset. The results indicate that SAM-Att delivers the best automatic left-ventricle segmentation performance.
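The "low-rank fine-tuning strategy" mentioned above follows the general LoRA idea: freeze the pretrained SAM weights and learn only small low-rank updates to selected projection layers. A minimal PyTorch sketch of this idea is shown below; the class name, rank, and scaling hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer with a trainable low-rank
    update: y = W x + (alpha / r) * B(A(x)), where A and B are small.

    Hypothetical sketch of LoRA-style fine-tuning; not the SAM-Att code.
    """

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained SAM weights stay frozen
        # Low-rank factors: down-project to rank r, then back up.
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)  # update starts at zero, so the
        # wrapped layer initially behaves exactly like pretrained SAM
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.B(self.A(x))
```

In a LoRA-style setup, wrappers like this would typically be applied to the attention projection layers of SAM's image encoder, so that only the small `A`/`B` matrices are trained while the large pretrained backbone remains untouched.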
Keywords