Applied Sciences (Jun 2024)

On the Importance of Diversity When Training Deep Learning Segmentation Models with Error-Prone Pseudo-Labels

  • Nana Yang,
  • Charles Rongione,
  • Anne-Laure Jacquemart,
  • Xavier Draye,
  • Christophe De Vleeschouwer

DOI
https://doi.org/10.3390/app14125156
Journal volume & issue
Vol. 14, no. 12
p. 5156

Abstract


The key to training deep learning (DL) segmentation models lies in the collection of annotated data. The annotation process is, however, generally expensive in terms of human effort. Our paper leverages deep or traditional machine learning methods trained on a small set of manually labeled data to automatically generate pseudo-labels on large datasets, which are then used to train so-called data-reinforced deep learning models. The relevance of the approach is demonstrated in two application scenarios that are distinct both in terms of task and pseudo-label generation procedure, which broadens the scope of our findings. Our experiments reveal that (i) data reinforcement helps, even with error-prone pseudo-labels; (ii) convolutional neural networks can regularize their training with respect to labeling errors; and (iii) there is an advantage to increasing diversity when generating the pseudo-labels, either by enriching the manual annotation with accurately annotated singular samples, or by considering soft pseudo-labels per sample when prior information about their certainty is available.
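To make the last point concrete, the following is a minimal sketch (not taken from the paper) of how per-pixel soft pseudo-labels with a certainty weighting might be used in a segmentation loss. The function names, the temperature parameter, and the confidence map are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_pseudo_labels(teacher_probs, temperature=2.0):
    """Soften a teacher model's per-pixel class probabilities.

    teacher_probs: (H, W, C) array of class probabilities.
    A temperature > 1 flattens the distribution, so the student is
    not trained against over-confident (possibly wrong) hard labels.
    (Hypothetical helper; temperature value is an assumption.)
    """
    logits = np.log(teacher_probs + 1e-8) / temperature
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soft_cross_entropy(student_probs, soft_targets, confidence=None):
    """Per-pixel cross-entropy against soft targets.

    confidence: optional (H, W) map in [0, 1] expressing prior
    certainty about each pseudo-label; uncertain pixels contribute
    less to the loss.
    """
    ce = -(soft_targets * np.log(student_probs + 1e-8)).sum(axis=-1)
    if confidence is not None:
        ce = ce * confidence
    return ce.mean()
```

In a pseudo-labeling pipeline, `teacher_probs` would come from the model trained on the small manually labeled set, and `soft_cross_entropy` would replace the usual hard-label loss when training the data-reinforced student model.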

Keywords