Mathematical Biosciences and Engineering (Jan 2024)

Dual uncertainty-guided multi-model pseudo-label learning for semi-supervised medical image segmentation

  • Zhanhong Qiu,
  • Weiyan Gan,
  • Zhi Yang,
  • Ran Zhou,
  • Haitao Gan

DOI
https://doi.org/10.3934/mbe.2024097
Journal volume & issue
Vol. 21, no. 2
pp. 2212–2232

Abstract


Semi-supervised medical image segmentation is currently a highly active research area. Pseudo-label learning is a classical semi-supervised learning method that acquires additional knowledge by generating pseudo-labels for unlabeled data. However, this method depends on the quality of the pseudo-labels and can lead to an unstable training process because of differences between samples. Additionally, generating pseudo-labels directly from the model itself accelerates noise accumulation, resulting in low-confidence pseudo-labels. To address these issues, we propose a dual uncertainty-guided multi-model pseudo-label learning framework (DUMM) for semi-supervised medical image segmentation. The framework consists of two main parts: a sample selection module based on sample-level uncertainty (SUS), intended to achieve a more stable and smoother training process, and a multi-model pseudo-label generation module based on pixel-level uncertainty (PUM), intended to obtain high-quality pseudo-labels. We conducted a series of experiments on two public medical datasets, ACDC2017 and ISIC2018. Compared to the baseline, our method improved the Dice scores by 6.5% and 4.0% on the two datasets, respectively, and showed a clear advantage over the comparative methods. These results validate the feasibility and applicability of our approach.
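To make the general idea of pixel-level uncertainty-guided pseudo-labeling concrete, the sketch below illustrates one common formulation: average the softmax predictions of several models, use per-pixel predictive entropy as the uncertainty estimate, and keep only low-uncertainty pixels as pseudo-labels. This is a minimal illustration under our own assumptions, not the authors' DUMM implementation; the function name, the entropy threshold, and the masking scheme are all hypothetical.

    # Minimal sketch of pixel-level uncertainty masking for multi-model
    # pseudo-labels (illustrative only; not the paper's exact PUM module).
    import torch
    import torch.nn.functional as F

    def pixel_uncertainty_pseudo_labels(models, image, entropy_threshold=0.5):
        """Average softmax predictions from several models, then keep only
        pixels whose predictive entropy falls below a threshold."""
        probs = []
        with torch.no_grad():
            for model in models:
                model.eval()
                probs.append(F.softmax(model(image), dim=1))  # (B, C, H, W)
        mean_prob = torch.stack(probs).mean(dim=0)            # ensemble average

        # Per-pixel predictive entropy serves as the uncertainty estimate.
        entropy = -(mean_prob * torch.log(mean_prob + 1e-8)).sum(dim=1)  # (B, H, W)

        pseudo_label = mean_prob.argmax(dim=1)        # hard pseudo-labels
        confident_mask = entropy < entropy_threshold  # keep low-uncertainty pixels
        return pseudo_label, confident_mask

In such a setup, the returned mask would typically weight or zero out the unsupervised loss at high-uncertainty pixels so that noisy pseudo-labels do not dominate training; sample-level uncertainty (as in SUS) could analogously be computed by aggregating this per-pixel uncertainty over an image to rank or select unlabeled samples.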

Keywords