IEEE Access (Jan 2022)

Label Augmentation to Improve Generalization of Deep Learning Semantic Segmentation of Laparoscopic Images

  • Leticia Monasterio-Exposito,
  • Daniel Pizarro,
  • Javier Macias-Guarasa

DOI: https://doi.org/10.1109/ACCESS.2022.3162630
Journal volume & issue: Vol. 10, pp. 37345–37359

Abstract

Training Deep Neural Networks to solve semantic segmentation is challenging with small labeled datasets, which leads to overfitting. This is especially problematic for medical images and, in particular, laparoscopic surgery images. In this context, ground-truth segmentation labels are available only for a small set of images from a few patients. Moreover, inter-patient variability is very high in practice, so models trained for a specific setup and set of patients usually perform poorly when deployed in a new environment. This work proposes a new training strategy that improves the generalization accuracy of current state-of-the-art semantic segmentation methods applied to laparoscopic images. Our approach is based on training a discriminator network that learns to detect segmentation errors, producing a dense segmentation error map. Unlike in adversarial networks, we train the discriminator offline by synthetically altering ground-truth segmentation labels with simple morphological and geometric operations. We then use the discriminator to train a segmentation neural network by minimizing the discriminator-predicted error jointly with a standard segmentation loss. This strategy yields segmentation models that are significantly more accurate on unseen images than models relying on data augmentation alone. The technique is well suited to boosting the performance of any state-of-the-art segmentation network and can be combined with other data augmentation strategies. This paper evaluates and validates our proposal by training and testing common state-of-the-art segmentation models on publicly available semantic segmentation datasets specialized in laparoscopic and endoscopic surgery. The results show that our method is effective, obtaining a significant improvement in segmentation accuracy, especially on challenging small-size datasets.
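As a rough illustration of the two-stage strategy the abstract describes, the PyTorch-style sketch below perturbs ground-truth masks with simple morphological and geometric operations, trains a discriminator to predict a dense error map from the perturbed labels, and then adds the discriminator-predicted error to a standard segmentation loss. All names (perturb_label, the two-input discriminator, lambda_err), the assumed binary mask shape (B, H, W), and the particular perturbation operators are illustrative assumptions, not the authors' implementation.

import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import binary_dilation, binary_erosion


def perturb_label(mask: np.ndarray) -> np.ndarray:
    """Corrupt a binary (H, W) ground-truth mask with simple morphological
    and geometric operations to simulate plausible segmentation errors."""
    out = mask.astype(bool)
    if np.random.rand() < 0.5:
        out = binary_dilation(out, iterations=np.random.randint(1, 6))
    else:
        out = binary_erosion(out, iterations=np.random.randint(1, 6))
    dy, dx = np.random.randint(-10, 11, size=2)          # random translation
    out = np.roll(out, shift=(dy, dx), axis=(-2, -1))
    return out.astype(np.float32)


def discriminator_step(disc, optimizer, image, gt_mask):
    """Offline stage: teach the discriminator to output a dense error map,
    supervised by the disagreement between perturbed and clean labels."""
    perturbed = torch.stack([
        torch.from_numpy(perturb_label(m.cpu().numpy())) for m in gt_mask
    ]).to(image.device)
    target = (perturbed != gt_mask).float()               # per-pixel error target
    pred_err = disc(image, perturbed)                     # logits, shape (B, H, W)
    loss = F.binary_cross_entropy_with_logits(pred_err, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def segmentation_step(seg_net, disc, optimizer, image, gt_mask, lambda_err=0.1):
    """Second stage: train the segmentation network with a standard loss plus
    the error the (frozen) discriminator predicts for its own output."""
    logits = seg_net(image)
    seg_loss = F.binary_cross_entropy_with_logits(logits, gt_mask)
    pred_err = disc(image, torch.sigmoid(logits))         # disc params not in optimizer
    err_loss = torch.sigmoid(pred_err).mean()
    loss = seg_loss + lambda_err * err_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch the discriminator is trained first and then kept fixed while the segmentation network is optimized; how the paper schedules the two stages and weights the error term is described in the full text.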

Keywords