Journal of Clinical Medicine (Apr 2023)

Impact of Noisy Labels on Dental Deep Learning—Calculus Detection on Bitewing Radiographs

  • Martha Büttner,
  • Lisa Schneider,
  • Aleksander Krasowski,
  • Joachim Krois,
  • Ben Feldberg,
  • Falk Schwendicke

DOI
https://doi.org/10.3390/jcm12093058
Journal volume & issue
Vol. 12, no. 9
p. 3058

Abstract

Supervised deep learning requires labeled data. On medical images, data are often labeled inconsistently (e.g., with boxes drawn too large) and with varying accuracy. We aimed to assess the impact of such label noise on dental calculus detection on bitewing radiographs. On 2584 bitewings, calculus was accurately labeled using bounding boxes (BBs); these labels were then artificially enlarged and shrunk stepwise, resulting in 30 consistently and 9 inconsistently noisy datasets. An object detection network (YOLOv5) was trained on each dataset and evaluated on both noisy and accurate test data. Training on accurately labeled data yielded an mAP50 of 0.77 (SD: 0.01). When models were trained on consistently too small BBs, performance decreased significantly on both accurate and noisy test data. Performance of models trained on consistently too large BBs decreased immediately on accurate test data (e.g., 200% BBs: mAP50: 0.24; SD: 0.05; p < 0.05). In conclusion, accurate predictions require accurately labeled training data. Testing on noisy data may disguise the effects of noisy training data. Researchers should be aware of the relevance of accurately annotated data, especially when testing model performance.
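The abstract describes simulating label noise by enlarging or shrinking ground-truth bounding boxes stepwise (e.g., to 200% of their original size). A minimal sketch of one such step might look as follows; the function name and the pixel-coordinate box format `(x_min, y_min, x_max, y_max)` are illustrative assumptions, not the authors' actual implementation, which is not given in the abstract.

```python
def scale_bbox(bbox, factor, img_w, img_h):
    """Scale a bounding box about its center by `factor`, clipped to the image.

    bbox: (x_min, y_min, x_max, y_max) in pixels.
    factor: e.g., 2.0 simulates a consistently "200%" (too large) label,
    0.5 a consistently too small one.
    """
    x_min, y_min, x_max, y_max = bbox
    # Box center stays fixed; only width and height are rescaled.
    cx = (x_min + x_max) / 2
    cy = (y_min + y_max) / 2
    half_w = (x_max - x_min) / 2 * factor
    half_h = (y_max - y_min) / 2 * factor
    # Clip to the image so enlarged boxes stay valid annotations.
    return (
        max(0.0, cx - half_w),
        max(0.0, cy - half_h),
        min(float(img_w), cx + half_w),
        min(float(img_h), cy + half_h),
    )

# Example: a 100x50 px box enlarged to 200% of its size.
print(scale_bbox((100, 100, 200, 150), 2.0, 1000, 800))
# -> (50.0, 75.0, 250.0, 175.0)
```

Applying such a transform with one fixed factor across all labels yields a "consistently" noisy dataset; drawing the factor per box (e.g., at random from several step sizes) would yield an "inconsistently" noisy one.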

Keywords