PLoS ONE (Oct 2022)

Learning rate of students detecting and annotating pediatric wrist fractures in supervised artificial intelligence dataset preparations

  • Eszter Nagy,
  • Robert Marterer,
  • Franko Hržić,
  • Erich Sorantin,
  • Sebastian Tschauner

Journal volume & issue
Vol. 17, no. 10

Abstract


The use of artificial intelligence (AI) in image analysis is an intensively debated topic in the radiology community these days. AI computer vision algorithms typically rely on large-scale image databases annotated by specialists. Developing and maintaining these databases is time-consuming, so involving non-experts in the annotation workflow should be considered. We assessed the learning rate of inexperienced evaluators in correctly labeling pediatric wrist fractures on digital radiographs. Students with and without a medical background labeled wrist fractures with bounding boxes in 7,000 radiographs over ten days, and pediatric radiologists regularly discussed their mistakes with them. We found F1 scores, as a measure of detection rate, to increase substantially under specialist feedback (mean 0.61±0.19 at day 1 to 0.97±0.02 at day 10, p<0.001), whereas the Intersection over Union, as a parameter of labeling precision, improved less markedly (mean 0.27±0.29 at day 1 to 0.53±0.25 at day 10, p<0.001). The time needed to correct the students' annotations decreased significantly (mean 22.7±6.3 seconds per image at day 1 to 8.9±1.2 seconds at day 10, p<0.001) and was substantially lower than the time required when the radiologists annotated alone. In conclusion, our data showed that involving undergraduate students in the annotation of pediatric wrist radiographs enables substantial time savings for specialists and should therefore be considered.
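For readers unfamiliar with the two metrics, the minimal Python sketch below shows how a bounding-box IoU and an F1 score over box annotations can be computed. The greedy matching scheme, the 0.5 IoU threshold, and all names are illustrative assumptions for this sketch, not the study's evaluation code.

```python
# Minimal sketch of the two metrics named in the abstract.
# Boxes are (x_min, y_min, x_max, y_max) tuples; the matching rule and
# the 0.5 threshold are assumptions, not the authors' exact procedure.

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned bounding boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def f1_score(student_boxes, reference_boxes, iou_threshold=0.5):
    """F1 of student boxes against reference (radiologist) boxes.

    A student box counts as a true positive if it overlaps a not-yet-
    matched reference box with IoU >= iou_threshold (greedy matching).
    """
    matched = set()
    tp = 0
    for sb in student_boxes:
        for i, rb in enumerate(reference_boxes):
            if i not in matched and iou(sb, rb) >= iou_threshold:
                matched.add(i)
                tp += 1
                break
    fp = len(student_boxes) - tp   # student boxes with no reference match
    fn = len(reference_boxes) - tp # reference boxes the student missed
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

# Example: one student box roughly covering one reference fracture box.
print(iou((10, 10, 50, 50), (20, 20, 60, 60)))            # ~0.39
print(f1_score([(10, 10, 50, 50)], [(12, 11, 48, 52)]))   # 1.0
```

The example illustrates why the two numbers can diverge as they do in the abstract: a student box that merely overlaps the fracture region enough to match counts fully toward F1 (detection), while IoU continues to penalize any imprecision in the box outline.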