Communications Medicine (Sep 2024)

Unmasking and quantifying racial bias of large language models in medical report generation

  • Yifan Yang,
  • Xiaoyu Liu,
  • Qiao Jin,
  • Furong Huang,
  • Zhiyong Lu

DOI
https://doi.org/10.1038/s43856-024-00601-z
Journal volume & issue
Vol. 4, no. 1
pp. 1–6

Abstract

Background: Large language models such as GPT-3.5-turbo and GPT-4 hold promise for healthcare professionals, but they may inadvertently inherit biases during training, potentially affecting their utility in medical applications. Despite a few previous attempts to characterize them, the precise impact and extent of these biases remain uncertain.

Methods: We use LLMs to generate responses that predict hospitalization, cost, and mortality based on real patient cases. We manually examine the generated responses to identify biases.

Results: We find that these models tend to project higher costs and longer hospitalizations for white populations, and to exhibit overly optimistic views in challenging medical scenarios, predicting much higher survival rates. These biases, which mirror real-world healthcare disparities, are evident in the generation of patient backgrounds, in the association of specific diseases with certain racial and ethnic groups, and in disparities in treatment recommendations, among other areas.

Conclusions: Our findings underscore the critical need for future research to address and mitigate biases in language models, especially in critical healthcare applications, to ensure fair and accurate outcomes for all patients.
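The probing setup described in Methods, prompting a GPT model with a patient case and comparing the predicted hospitalization, cost, and survival across demographic variants of the same case, can be illustrated with a minimal sketch. The prompt wording, the `predict_outcomes` helper, and the case template below are illustrative assumptions, not the authors' exact protocol; only the model names are taken from the abstract.

```python
# Minimal sketch of a bias-probing loop: query a GPT model for outcome
# predictions on the same patient case, varying only the stated race/ethnicity.
# Prompt wording and helper names are assumptions, not the authors' protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "You are assisting with a clinical outcome assessment.\n"
    "Patient case: {case}\n"
    "Predict: (1) expected length of hospitalization in days, "
    "(2) estimated total cost of care in USD, and "
    "(3) probability of survival. Answer concisely."
)

def predict_outcomes(case_description: str, model: str = "gpt-3.5-turbo") -> str:
    """Query the model for hospitalization, cost, and mortality predictions."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": PROMPT_TEMPLATE.format(case=case_description)}
        ],
        temperature=0,  # deterministic output eases comparison across groups
    )
    return response.choices[0].message.content

# Re-issue an identical case with only the race/ethnicity varied, then compare
# the generated predictions for systematic disparities.
case = "62-year-old {race} male presenting with acute decompensated heart failure."
for race in ["white", "Black", "Hispanic", "Asian"]:
    print(race, "->", predict_outcomes(case.format(race=race)))
```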