IEEE Access (Jan 2022)

Automated Annotator Variability Inspection for Biomedical Image Segmentation

  • Marcel P. Schilling,
  • Tim Scherr,
  • Friedrich R. Munke,
  • Oliver Neumann,
  • Mark Schutera,
  • Ralf Mikut,
  • Markus Reischl

DOI: https://doi.org/10.1109/ACCESS.2022.3140378
Journal volume & issue: Vol. 10, pp. 2753–2765

Abstract


Supervised deep learning approaches for automated diagnosis support require datasets annotated by experts. Intra-annotator variability of a single annotator and inter-annotator variability between annotators can affect the quality of the diagnosis support. As medical experts will always differ in annotation details, quantitative studies concerning annotation quality are of particular interest. A consistent and noise-free annotation of large-scale datasets by, for example, dermatologists or pathologists is a current challenge. Hence, methods are needed to automatically inspect annotations in datasets. In this paper, we categorize annotation noise in image segmentation tasks, present methods to simulate annotation noise, and examine its impact on segmentation quality. Two novel automated methods to identify intra-annotator and inter-annotator inconsistencies based on uncertainty-aware deep neural networks are proposed. We demonstrate the benefits of our automated inspection methods, such as the focused re-inspection of noisy annotations or the detection of generally different annotation styles, using the biomedical ISIC 2017 Melanoma image segmentation dataset.
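To illustrate the kind of annotation-noise simulation the abstract refers to, the minimal sketch below perturbs a binary segmentation mask with morphological over- and under-segmentation. The function name `perturb_mask`, the use of `scipy.ndimage`, and the morphological noise model are assumptions made for this example only; the paper may categorize and simulate annotation noise differently.

```python
# Hypothetical sketch: simulating annotator variability on a binary mask.
# This is NOT the paper's method, only an illustrative noise model.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def perturb_mask(mask: np.ndarray, severity: int, mode: str = "over") -> np.ndarray:
    """Simulate over- or under-segmentation of a ground-truth annotation.

    mask: binary annotation (H x W) with values in {0, 1}.
    severity: number of morphological iterations (larger = noisier annotation).
    mode: "over" grows the object boundary, "under" shrinks it.
    """
    if mode == "over":
        noisy = binary_dilation(mask.astype(bool), iterations=severity)
    else:
        noisy = binary_erosion(mask.astype(bool), iterations=severity)
    return noisy.astype(mask.dtype)

# Example: derive two simulated "annotators" from one clean reference mask.
clean = np.zeros((64, 64), dtype=np.uint8)
clean[20:40, 20:40] = 1
annotator_a = perturb_mask(clean, severity=2, mode="over")   # over-segmentation
annotator_b = perturb_mask(clean, severity=2, mode="under")  # under-segmentation
```

Deriving several perturbed masks from one clean reference in this way allows simulated intra- or inter-annotator variability to be injected into a dataset, so that its effect on downstream segmentation quality can be studied in a controlled manner.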

Keywords