npj Digital Medicine (Dec 2024)

Autonomous medical evaluation for guideline adherence of large language models

  • Dennis Fast,
  • Lisa C. Adams,
  • Felix Busch,
  • Conor Fallon,
  • Marc Huppertz,
  • Robert Siepmann,
  • Philipp Prucker,
  • Nadine Bayerl,
  • Daniel Truhn,
  • Marcus Makowski,
  • Alexander Löser,
  • Keno K. Bressem

DOI
https://doi.org/10.1038/s41746-024-01356-6
Journal volume & issue
Vol. 7, no. 1
pp. 1–14

Abstract

Autonomous Medical Evaluation for Guideline Adherence (AMEGA) is a comprehensive benchmark designed to evaluate large language models' adherence to medical guidelines across 20 diagnostic scenarios spanning 13 specialties. It provides an evaluation framework and methodology for assessing models' capabilities in medical reasoning, differential diagnosis, treatment planning, and guideline adherence, using open-ended questions that mirror real-world clinical interactions. The benchmark comprises 135 questions and 1337 weighted scoring elements designed to assess comprehensive medical knowledge. In tests of 17 LLMs, GPT-4 scored highest with 41.9/50, followed closely by Llama-3 70B and WizardLM-2-8x22B. For comparison, a recent medical graduate scored 25.8/50. The benchmark introduces novel content to avoid the issue of LLMs memorizing existing medical data. AMEGA's publicly available code supports further research in AI-assisted clinical decision-making, aiming to enhance patient care by aiding clinicians in diagnosis and treatment under time constraints.
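
To make the "weighted scoring elements" concrete, the sketch below shows one plausible way such scoring could aggregate: each question carries weighted elements, and an answer earns the weight of every element it covers. This is a minimal illustration only; the names (`ScoringElement`, `score_answer`) and the keyword-matching rule are assumptions for demonstration, not the evaluation logic of AMEGA's published code, which should be consulted directly.

```python
from dataclasses import dataclass


@dataclass
class ScoringElement:
    """One weighted element of a question's scoring rubric (illustrative)."""
    description: str     # e.g. "orders a non-contrast head CT"
    keywords: list[str]  # surface cues this toy matcher looks for
    weight: float        # contribution to the question score


def score_answer(answer: str, elements: list[ScoringElement]) -> float:
    """Return the fraction of total rubric weight covered by the answer."""
    text = answer.lower()
    earned = sum(
        el.weight
        for el in elements
        if any(kw.lower() in text for kw in el.keywords)
    )
    total = sum(el.weight for el in elements)
    return earned / total if total else 0.0


# Usage: a single question with two hypothetical weighted elements.
elements = [
    ScoringElement("orders head CT", ["head ct", "cranial ct"], 2.0),
    ScoringElement("checks blood glucose", ["glucose"], 1.0),
]
print(score_answer("I would obtain a non-contrast head CT first.", elements))
# -> 0.666..., i.e. 2.0 of 3.0 total weight earned
```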