Scientific Reports (Apr 2022)

Differences between human and machine perception in medical diagnosis

  • Taro Makino,
  • Stanisław Jastrzębski,
  • Witold Oleszkiewicz,
  • Celin Chacko,
  • Robin Ehrenpreis,
  • Naziya Samreen,
  • Chloe Chhor,
  • Eric Kim,
  • Jiyon Lee,
  • Kristine Pysarenko,
  • Beatriu Reig,
  • Hildegard Toth,
  • Divya Awal,
  • Linda Du,
  • Alice Kim,
  • James Park,
  • Daniel K. Sodickson,
  • Laura Heacock,
  • Linda Moy,
  • Kyunghyun Cho,
  • Krzysztof J. Geras

DOI: https://doi.org/10.1038/s41598-022-10526-z
Journal volume & issue: Vol. 12, no. 1, pp. 1–13

Abstract

Deep neural networks (DNNs) show promise in image-based medical diagnosis, but cannot be fully trusted since they can fail for reasons unrelated to the underlying pathology. Humans are less likely to make such superficial mistakes, since they use features that are grounded in medical science. It is therefore important to know whether DNNs use different features than humans. Towards this end, we propose a framework for comparing human and machine perception in medical diagnosis. We frame the comparison in terms of perturbation robustness, and mitigate Simpson’s paradox by performing a subgroup analysis. The framework is demonstrated with a case study in breast cancer screening, where we separately analyze microcalcifications and soft tissue lesions. While it is inconclusive whether humans and DNNs use different features to detect microcalcifications, we find that for soft tissue lesions, DNNs rely on high-frequency components ignored by radiologists. Moreover, these features are located outside of the region of the images found most suspicious by radiologists. This difference between humans and machines was only visible through subgroup analysis, which highlights the importance of incorporating medical domain knowledge into the comparison.
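The perturbation-robustness idea in the abstract — removing high-frequency image components and measuring how much a reader's (or model's) output changes — can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' code: the function names `low_pass` and `robustness_curve`, the radial Fourier-domain mask, and the `predict` callable are all assumptions introduced for this example.

```python
import numpy as np

def low_pass(image, cutoff_frac):
    """Remove high-frequency components of a 2-D image.

    cutoff_frac is the fraction of the half-image radius kept:
    1.0 keeps nearly everything, small values keep only coarse structure.
    """
    f = np.fft.fftshift(np.fft.fft2(image))        # center the zero frequency
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = r <= cutoff_frac * min(h, w) / 2        # circular low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def robustness_curve(predict, images, cutoffs):
    """Mean absolute change in prediction as high frequencies are removed.

    A predictor that ignores high frequencies yields a flat, near-zero
    curve; one that relies on them degrades sharply at low cutoffs.
    """
    base = np.array([predict(im) for im in images])
    return [
        float(np.mean(np.abs(
            np.array([predict(low_pass(im, c)) for im in images]) - base)))
        for c in cutoffs
    ]
```

Comparing such curves between radiologists and DNNs, separately per lesion subgroup, is the essence of the analysis: a prediction rule driven by high-frequency content (e.g. fine texture) shows a large change at aggressive cutoffs, while one driven by coarse structure does not.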