IEEE Access (Jan 2023)

Cross-Loss Pseudo Labeling for Semi-Supervised Segmentation

  • Seungyeol Lee,
  • Taeho Kim,
  • Jae-Pil Heo

DOI
https://doi.org/10.1109/ACCESS.2023.3312303
Journal volume & issue
Vol. 11
pp. 96761 – 96772

Abstract


Training semantic segmentation models requires pixel-level annotations, which makes dataset creation costly. To alleviate this, recent research has focused on semi-supervised learning, which utilizes only a small amount of annotation. In this setting, pseudo labeling techniques are frequently employed to assign labels to unlabeled data based on the model’s predictions. However, pseudo labeling has a fundamental limitation: since pseudo labels are derived from the model’s predictions, they can be assigned overconfidently even to erroneous predictions, especially when the model suffers from confirmation bias. We observe that the overconfident prediction tendency of the cross-entropy loss exacerbates this issue, and we find that the focal loss, known for enabling more reliable confidence estimation, can complement the cross-entropy loss. The cross-entropy loss produces abundant pseudo labels because it tends to be overconfident; the focal loss, in contrast, yields more conservative confidence and therefore produces fewer pseudo labels. Based on this complementary behavior of the two loss functions, we propose a simple yet effective pseudo labeling technique, Cross-Loss Pseudo Labeling (CLP), which alleviates both confirmation bias and pseudo-label scarcity. Intuitively, the conservative predictions of the focal loss mitigate the overconfidence of the cross-entropy loss, while the cross-entropy loss increases the number of pseudo labels available to the focal-loss branch. Additionally, CLP improves the performance of tail classes in class-imbalanced datasets through the class-bias mitigation effect of the focal loss.
In experimental results, our simple CLP improves mIoU by up to +10.4%p over a supervised baseline when only 1/32 of the true labels are available on PASCAL VOC 2012, and it surpasses the performance of state-of-the-art methods.
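The abstract does not give implementation details, but the cross-loss idea it describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' method: it assumes two prediction branches (one trained with cross-entropy, one with focal loss), a hypothetical confidence threshold `tau`, and per-pixel class logits of shape `(H, W, C)`; each branch is supervised by the other branch's confident pseudo labels.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last (class) axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels, eps=1e-8):
    """Mean cross-entropy between per-pixel probabilities and integer labels."""
    pt = np.take_along_axis(probs, labels[..., None], axis=-1).squeeze(-1)
    return float(np.mean(-np.log(pt + eps)))

def focal_loss(probs, labels, gamma=2.0, eps=1e-8):
    """Mean focal loss: down-weights well-classified pixels by (1 - p_t)^gamma."""
    pt = np.take_along_axis(probs, labels[..., None], axis=-1).squeeze(-1)
    return float(np.mean((1.0 - pt) ** gamma * -np.log(pt + eps)))

def select_pseudo_labels(probs, tau):
    """Keep only pixels whose top-class confidence reaches the threshold tau."""
    return probs.argmax(axis=-1), probs.max(axis=-1) >= tau

def clp_losses(logits_ce, logits_fl, tau=0.95, gamma=2.0):
    """Cross-exchange pseudo labels between a CE branch and a focal branch.

    Illustrative sketch: the threshold value and two-branch design are
    assumptions, not details given in the abstract.
    """
    probs_ce, probs_fl = softmax(logits_ce), softmax(logits_fl)
    lbl_ce, mask_ce = select_pseudo_labels(probs_ce, tau)  # rich but overconfident
    lbl_fl, mask_fl = select_pseudo_labels(probs_fl, tau)  # scarce but conservative
    # Each branch is supervised by the *other* branch's confident predictions.
    loss_fl = focal_loss(probs_fl[mask_ce], lbl_ce[mask_ce], gamma) if mask_ce.any() else 0.0
    loss_ce = cross_entropy(probs_ce[mask_fl], lbl_fl[mask_fl]) if mask_fl.any() else 0.0
    return loss_ce, loss_fl
```

The sketch makes the complementary mechanism concrete: when the focal branch is too uncertain to clear the threshold, the CE branch still supplies pseudo labels for it, while the focal branch's more conservative confidences gate what flows back to the CE branch.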

Keywords