Heliyon (May 2023)
Feature-enhanced adversarial semi-supervised semantic segmentation network for pulmonary embolism annotation
Abstract
This study established a feature-enhanced adversarial semi-supervised semantic segmentation model to automatically annotate pulmonary embolism (PE) lesion areas in computed tomography pulmonary angiogram (CTPA) images. To date, PE CTPA image segmentation methods have been trained by supervised learning. However, when CTPA images come from different hospitals, supervised models must be retrained and the images relabeled. This study therefore proposed a semi-supervised learning method that makes the model applicable to different datasets through the addition of a small number of unlabeled images. Training the model with both labeled and unlabeled images improved its accuracy on unlabeled images and reduced the labeling cost. Our proposed semi-supervised segmentation model comprised a segmentation network and a discriminator network. We added feature information generated by the encoder of the segmentation network to the discriminator so that it could learn the similarity between the predicted labels and the ground-truth labels. A modified HRNet-based architecture was used as the segmentation network because it maintains higher-resolution feature maps throughout the convolutional operations, improving the prediction of small PE lesion areas. We trained the semi-supervised model on a labeled open-source dataset together with an unlabeled National Cheng Kung University Hospital (NCKUH) dataset (IRB number: B-ER-108-380); the resulting mean intersection over union (mIOU), Dice score, and sensitivity reached 0.3510, 0.4854, and 0.4253, respectively, on the NCKUH dataset. We then fine-tuned and tested the model with a small number of unlabeled PE CTPA images from a China Medical University Hospital (CMUH) dataset (IRB number: CMUH110-REC3-173). Compared with the supervised model, our semi-supervised model improved the mIOU, Dice score, and sensitivity from 0.2344, 0.3325, and 0.3151 to 0.3721, 0.5113, and 0.4967, respectively. In conclusion, our semi-supervised model can improve accuracy on other datasets and reduce the labor cost of labeling, requiring only a small number of unlabeled images for fine-tuning.
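To make the described setup concrete, the following is a minimal PyTorch sketch of the general idea of feeding encoder features to a discriminator and combining a supervised loss on labeled images with an adversarial loss on unlabeled images. The toy encoder-decoder, the class and function names (TinySegNet, FeatureEnhancedDiscriminator, training_step), and the loss weighting are illustrative assumptions; the paper's actual segmentation network is a modified HRNet and its exact losses are not reproduced here.

```python
# Sketch only: not the authors' implementation. The paper uses a modified HRNet
# as the segmentation network; TinySegNet is a stand-in so the example stays short.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy encoder-decoder segmentation network that also exposes its encoder features."""
    def __init__(self, in_ch=1, num_classes=2, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(feat_ch, num_classes, 1)

    def forward(self, x):
        feats = self.encoder(x)       # encoder features, also passed to the discriminator
        logits = self.decoder(feats)  # per-pixel class logits
        return logits, feats

class FeatureEnhancedDiscriminator(nn.Module):
    """Judges whether a label map looks predicted or ground truth, conditioned on encoder features."""
    def __init__(self, num_classes=2, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + feat_ch, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),  # per-location real/fake confidence map
        )

    def forward(self, label_map, feats):
        return self.net(torch.cat([label_map, feats], dim=1))

def training_step(seg, disc, labeled, masks, unlabeled, lambda_adv=0.01):
    """One combined step: supervised loss on labeled slices, adversarial loss on unlabeled slices."""
    # Labeled branch: ordinary per-pixel cross-entropy supervision.
    logits_l, _ = seg(labeled)
    loss_sup = F.cross_entropy(logits_l, masks)

    # Unlabeled branch: push predictions toward the discriminator's "ground-truth-like" decision.
    logits_u, feats_u = seg(unlabeled)
    pred_u = torch.softmax(logits_u, dim=1)
    d_out = disc(pred_u, feats_u)
    loss_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

    return loss_sup + lambda_adv * loss_adv
```

The discriminator's own update (distinguishing ground-truth masks from predictions) is omitted for brevity, and the weight lambda_adv is an assumed placeholder; in practice it is tuned so that the adversarial term guides, rather than dominates, the supervised segmentation loss.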