Nature Communications (Oct 2023)

Improving model fairness in image-based computer-aided diagnosis

  • Mingquan Lin,
  • Tianhao Li,
  • Yifan Yang,
  • Gregory Holste,
  • Ying Ding,
  • Sarah H. Van Tassel,
  • Kyle Kovacs,
  • George Shih,
  • Zhangyang Wang,
  • Zhiyong Lu,
  • Fei Wang,
  • Yifan Peng

DOI
https://doi.org/10.1038/s41467-023-41974-4
Journal volume & issue
Vol. 14, no. 1
pp. 1–9

Abstract

Deep learning has become a popular tool for computer-aided diagnosis using medical images, sometimes matching or exceeding the performance of clinicians. However, these models can also reflect and amplify human bias, potentially resulting in inaccurate or missed diagnoses. Despite this concern, the problem of improving model fairness in medical image classification with deep learning has yet to be fully studied. To address this issue, we propose an algorithm that leverages the marginal pairwise equal opportunity criterion to reduce bias in medical image classification. Our evaluations across four tasks using four independent large-scale cohorts demonstrate that our proposed algorithm not only improves fairness in individual and intersectional subgroups but also maintains overall performance. Specifically, the pairwise fairness difference between our proposed model and the baseline model was reduced by over 35% in relative terms, while the relative change in AUC value was typically within 1%. By reducing the bias generated by deep learning models, our proposed approach can potentially alleviate concerns about the fairness and reliability of image-based computer-aided diagnosis.
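The abstract's fairness criterion builds on equal opportunity, which compares true positive rates (TPRs) across demographic subgroups. The sketch below is not the authors' implementation; it is a minimal illustration, assuming binary labels and predictions, of how a largest pairwise equal-opportunity gap could be measured across subgroups:

```python
import numpy as np

def subgroup_tpr(y_true, y_pred, groups):
    """True positive rate (sensitivity) computed separately for each subgroup."""
    tprs = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.sum() == 0:
            continue  # skip subgroups with no positive cases
        tprs[g] = float((y_pred[positives] == 1).mean())
    return tprs

def max_pairwise_eo_gap(y_true, y_pred, groups):
    """Largest pairwise equal-opportunity gap: max |TPR_a - TPR_b| over subgroup pairs."""
    tprs = list(subgroup_tpr(y_true, y_pred, groups).values())
    return max(abs(a - b) for a in tprs for b in tprs)

# Toy illustration with two hypothetical subgroups "A" and "B"
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
gap = max_pairwise_eo_gap(y_true, y_pred, groups)  # TPR_A = 0.75, TPR_B = 1.0
```

A training objective in this spirit would penalize this gap (or a smooth surrogate of it) alongside the classification loss, trading a small amount of overall AUC for a substantially smaller subgroup TPR disparity; the paper's actual formulation should be consulted for the exact loss.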