Cognitive Research (May 2024)

Boosting wisdom of the crowd for medical image annotation using training performance and task features

  • Eeshan Hasan,
  • Erik Duhaime,
  • Jennifer S. Trueblood

DOI
https://doi.org/10.1186/s41235-024-00558-6
Journal volume & issue
Vol. 9, no. 1
pp. 1 – 21

Abstract


A crucial bottleneck in medical artificial intelligence (AI) is the lack of high-quality labeled medical datasets. In this paper, we test a large variety of wisdom of the crowd algorithms to label medical images that were initially classified by individuals recruited through an app-based platform. Individuals classified skin lesions from the International Skin Lesion Challenge 2018 into 7 different categories. There was a large dispersion in the geographical location, experience, training, and performance of the recruited individuals. We tested several wisdom of the crowd algorithms of varying complexity, from a simple unweighted average to more complex Bayesian models that account for individual patterns of errors. Using a switchboard analysis, we observe that the best-performing algorithms rely on selecting top performers, weighting decisions by training accuracy, and taking the task environment into account. These algorithms far exceed expert performance. We conclude by discussing the implications of these approaches for the development of medical AI.
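
The following is a minimal sketch, not the authors' implementation, of two of the aggregation ideas the abstract mentions: selecting top performers by their training accuracy and weighting each annotator's vote by that accuracy. The category names, annotator IDs, and the `aggregate_label` helper are hypothetical illustrations.

```python
from collections import defaultdict

def aggregate_label(annotations, training_accuracy, top_k=10):
    """Return a crowd label for one image.

    annotations       -- list of (annotator_id, label) pairs for the image
    training_accuracy -- dict mapping annotator_id -> accuracy on training items
    top_k             -- keep only the top_k most accurate annotators
    """
    # Keep only votes from the top-k performers on the training set.
    ranked = sorted(annotations, key=lambda a: training_accuracy[a[0]], reverse=True)
    selected = ranked[:top_k]

    # Weight each retained vote by the annotator's training accuracy.
    scores = defaultdict(float)
    for annotator_id, label in selected:
        scores[label] += training_accuracy[annotator_id]

    # The label with the highest weighted score wins.
    return max(scores, key=scores.get)


# Hypothetical usage: three annotators labeling one skin-lesion image.
votes = [("a1", "melanoma"), ("a2", "nevus"), ("a3", "melanoma")]
accuracy = {"a1": 0.85, "a2": 0.60, "a3": 0.72}
print(aggregate_label(votes, accuracy, top_k=2))  # -> "melanoma"
```

More complex variants described in the paper, such as Bayesian models that account for individual patterns of errors, would replace the simple accuracy weights above with per-annotator, per-category confusion estimates.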