npj Digital Medicine (Nov 2024)

Simulated misuse of large language models and clinical credit systems

  • James T. Anibal,
  • Hannah B. Huth,
  • Jasmine Gunkel,
  • Susan K. Gregurick,
  • Bradford J. Wood

DOI
https://doi.org/10.1038/s41746-024-01306-2
Journal volume & issue
Vol. 7, no. 1
pp. 1–10

Abstract

In the future, large language models (LLMs) may enhance the delivery of healthcare, but there are risks of misuse. These models may be trained to allocate resources via unjust criteria involving multimodal data: financial transactions, internet activity, social behaviors, and healthcare information. This study shows that LLMs may be biased in favor of collective/systemic benefit over the protection of individual rights and could facilitate AI-driven social credit systems.