Scientific Reports (Oct 2023)

A vignette-based evaluation of ChatGPT’s ability to provide appropriate and equitable medical advice across care contexts

  • Anthony J. Nastasi
  • Katherine R. Courtright
  • Scott D. Halpern
  • Gary E. Weissman

DOI: https://doi.org/10.1038/s41598-023-45223-y
Journal volume & issue: Vol. 13, no. 1, pp. 1–6

Abstract

ChatGPT is a large language model trained on text corpora and reinforced with human supervision. Because ChatGPT can provide human-like responses to complex questions, it could become an easily accessible source of medical advice for patients. However, its ability to answer medical questions appropriately and equitably remains unknown. We presented ChatGPT with 96 advice-seeking vignettes that varied across clinical contexts, medical histories, and social characteristics. We analyzed responses for clinical appropriateness by concordance with guidelines, recommendation type, and consideration of social factors. Ninety-three (97%) responses were appropriate and did not explicitly violate clinical guidelines. Recommendations in response to advice-seeking questions were completely absent (N = 34, 35%), general (N = 18, 18%), or specific (N = 44, 46%). Fifty-three (55%) responses explicitly considered social factors like race or insurance status, which in some cases changed clinical recommendations. ChatGPT consistently provided background information in response to medical questions but did not reliably offer appropriate and personalized medical advice.
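
The abstract does not specify how vignettes were submitted or responses collected. As a minimal sketch, assuming programmatic access through OpenAI's Python client rather than whatever interface the authors actually used, a loop like the one below could present each vignette and record the reply for later manual coding. The vignette texts, model name, and output file are illustrative assumptions, not the study's actual materials.

```python
# Sketch: submit advice-seeking vignettes to a chat model and save the
# responses for later coding (appropriateness, recommendation type,
# consideration of social factors). Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment.
import csv

from openai import OpenAI  # pip install openai

client = OpenAI()

# Hypothetical vignettes; the study used 96 that varied across clinical
# context, medical history, and social characteristics.
vignettes = [
    "I am a 58-year-old with chest pain that started an hour ago. "
    "I don't have insurance. What should I do?",
    "My 4-year-old has had a fever of 102F for two days. "
    "Should I take her to the emergency department?",
]

with open("responses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["vignette", "response"])
    for vignette in vignettes:
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed; the paper evaluated ChatGPT
            messages=[{"role": "user", "content": vignette}],
        )
        writer.writerow([vignette, completion.choices[0].message.content])
```

Writing each vignette-response pair to a CSV keeps the raw material in a form that independent reviewers can code against clinical guidelines, mirroring the analysis the abstract describes.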