Scientific Reports (Jan 2024)

Task design for crowdsourced glioma cell annotation in microscopy images

  • Svea Schwarze,
  • Nadine S. Schaadt,
  • Viktor M. G. Sobotta,
  • Nicolai Spicher,
  • Thomas Skripuletz,
  • Majid Esmaeilzadeh,
  • Joachim K. Krauss,
  • Christian Hartmann,
  • Thomas M. Deserno,
  • Friedrich Feuerhake

DOI
https://doi.org/10.1038/s41598-024-51995-8
Journal volume & issue
Vol. 14, no. 1
pp. 1 – 12

Abstract

Crowdsourcing has been used in computational pathology to generate cell and cell-nucleus annotations for machine learning. Herein, we broaden its scope to the previously unsolved, challenging task of glioma cell detection, which requires multiplexed immunofluorescence microscopy due to diffuse invasiveness and the exceptional similarity between glioma cells and reactive astrocytes. In four pilot experiments, we iteratively developed a task design enabling high-quality annotations by crowdworkers on Amazon Mechanical Turk. We aggregated annotations by majority or weighted vote and validated the resulting consensus against ground truth in the final setting. Based on a YOLO convolutional neural network architecture, we used these consensus labels for training with different image representations regarding colors, intensities, and immunohistochemical marker combinations. A crowd of 712 workers produced aggregated point annotations in 235 images with an average $F_1$ score of 0.627 for majority vote. The networks achieved acceptable $F_1$ scores of up to 0.69 on average for YOLOv8 and provided first evidence of transferability to images lacking tumor markers, especially in IDH-wildtype glioblastoma. Our work confirms the feasibility of crowdsourcing to generate labels suitable for training machine learning tools in the challenging and clinically relevant use case of the glioma microenvironment.
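The abstract describes aggregating crowdworker point annotations by majority vote and scoring the consensus against ground truth with an $F_1$ metric. As an illustration only (the paper's exact aggregation procedure, distance radius, and vote threshold are not given here, so all parameters and the greedy clustering below are assumptions), a minimal sketch of this pipeline could look like:

```python
import math

def majority_vote(worker_points, n_workers, radius=10.0, threshold=0.5):
    """Aggregate pooled point annotations from several workers.

    Greedy clustering sketch (an assumption, not the paper's method):
    nearby points within `radius` pixels are grouped, and a cluster
    becomes a consensus annotation if at least `threshold * n_workers`
    workers contributed a point to it.
    """
    remaining = list(worker_points)
    consensus = []
    while remaining:
        seed = remaining[0]
        cluster = [p for p in remaining if math.dist(seed, p) <= radius]
        remaining = [p for p in remaining if math.dist(seed, p) > radius]
        if len(cluster) >= threshold * n_workers:
            # Centroid of the cluster serves as the consensus point.
            cx = sum(p[0] for p in cluster) / len(cluster)
            cy = sum(p[1] for p in cluster) / len(cluster)
            consensus.append((cx, cy))
    return consensus

def f1_score(pred, truth, radius=10.0):
    """F1 via greedy one-to-one matching of points within `radius`."""
    unmatched = list(truth)
    tp = 0
    for p in pred:
        hit = next((t for t in unmatched if math.dist(p, t) <= radius), None)
        if hit is not None:
            tp += 1
            unmatched.remove(hit)
    fp = len(pred) - tp
    fn = len(unmatched)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
```

For example, if two of three workers mark the same cell and one places a stray point, only the agreed-upon cell survives the vote, and matching that consensus point to the ground-truth location within the radius yields the true-positive count used in the $F_1$ computation.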