npj Digital Medicine (Mar 2023)

An adversarial training framework for mitigating algorithmic biases in clinical machine learning

  • Jenny Yang,
  • Andrew A. S. Soltan,
  • David W. Eyre,
  • Yang Yang,
  • David A. Clifton

DOI
https://doi.org/10.1038/s41746-023-00805-y
Journal volume & issue
Vol. 6, No. 1, pp. 1–10

Abstract

Machine learning is becoming increasingly prominent in healthcare. Although its benefits are clear, growing attention is being given to how these tools may exacerbate existing biases and disparities. In this study, we introduce an adversarial training framework that is capable of mitigating biases that may have been acquired through data collection. We demonstrate this proposed framework on the real-world task of rapidly predicting COVID-19, and focus on mitigating site-specific (hospital) and demographic (ethnicity) biases. Using the statistical definition of equalized odds, we show that adversarial training improves outcome fairness, while still achieving clinically effective screening performances (negative predictive values >0.98). We compare our method to previous benchmarks, and perform prospective and external validation across four independent hospital cohorts. Our method can be generalized to any outcomes, models, and definitions of fairness.
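
The general idea referenced in the abstract can be illustrated with a minimal sketch: a predictor is trained on the clinical label while an adversary tries to recover a protected attribute (such as hospital site) from the predictor's internal representation, and the predictor is penalized whenever the adversary succeeds, pushing it toward equalized odds (similar error rates across groups). The sketch below is illustrative only; the network sizes, loss weighting, and synthetic data are assumptions, not the authors' implementation.

```python
# Illustrative adversarial debiasing sketch (not the authors' code).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-ins for routinely collected features, COVID-19 labels,
# and a protected attribute with 4 groups (e.g. four hospital sites).
X = torch.randn(512, 20)
y = torch.randint(0, 2, (512, 1)).float()
site = torch.randint(0, 4, (512,))

encoder = nn.Sequential(nn.Linear(20, 32), nn.ReLU())
predictor = nn.Linear(32, 1)   # predicts COVID-19 status from the representation
adversary = nn.Linear(32, 4)   # tries to predict hospital site from the same representation

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
alpha = 1.0  # weight of the fairness penalty (assumed value)

for epoch in range(50):
    # 1) Update the adversary to recover the protected attribute.
    z = encoder(X).detach()                  # detach so only the adversary is updated
    opt_adv.zero_grad()
    adv_loss = ce(adversary(z), site)
    adv_loss.backward()
    opt_adv.step()

    # 2) Update the encoder/predictor: fit the clinical label while fooling the adversary.
    opt_main.zero_grad()
    z = encoder(X)
    task_loss = bce(predictor(z), y)
    fool_loss = -ce(adversary(z), site)      # maximize the adversary's error
    (task_loss + alpha * fool_loss).backward()
    opt_main.step()
```

In this kind of setup, increasing the assumed weight alpha trades predictive performance for fairness, which mirrors the performance/fairness trade-off the study evaluates with equalized odds.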