Scientific Reports (Aug 2024)

AI chatbots show promise but limitations on UK medical exam questions: a comparative performance study

  • Mohammed Ahmed Sadeq,
  • Reem Mohamed Farouk Ghorab,
  • Mohamed Hady Ashry,
  • Ahmed Mohamed Abozaid,
  • Haneen A. Banihani,
  • Moustafa Salem,
  • Mohammed Tawfiq Abu Aisheh,
  • Saad Abuzahra,
  • Marina Ramzy Mourid,
  • Mohamad Monif Assker,
  • Mohammed Ayyad,
  • Mostafa Hossam El Din Moawad

DOI
https://doi.org/10.1038/s41598-024-68996-2
Journal volume & issue
Vol. 14, no. 1
pp. 1–11

Abstract

Large language models (LLMs) like ChatGPT have potential applications in medical education, such as helping students study for their licensing exams by discussing unclear questions with them. However, they require evaluation on these complex tasks. The purpose of this study was to evaluate how well publicly accessible LLMs performed on simulated UK medical board exam questions. 423 board-style questions from 9 UK exams (MRCS, MRCP, etc.) were answered by seven LLMs (ChatGPT-3.5, ChatGPT-4, Bard, Perplexity, Claude, Bing, Claude Instant). There were 406 multiple-choice, 13 true/false, and 4 "choose N" questions covering topics in surgery, pediatrics, and other disciplines. The accuracy of each output was graded, and statistical tests were used to analyze differences among the LLMs. Leaked questions were excluded from the primary analysis. ChatGPT-4 scored highest (78.2%), followed by Bing (67.2%), Claude (64.4%), and Claude Instant (62.9%); Perplexity scored the lowest (56.1%). Scores differed significantly between LLMs overall (p < 0.001) and in pairwise comparisons. All LLMs scored higher on multiple-choice than on true/false or "choose N" questions. The LLMs demonstrated limitations in answering certain questions, indicating that refinements are needed before they can be relied on primarily in medical education. However, their expanding capabilities suggest a potential to improve training if thoughtfully implemented. Further research should explore specialty-specific LLMs and their optimal integration into medical curricula.
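The between-model comparison described in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical reconstruction of this kind of analysis: it builds a correct/incorrect contingency table per model, runs a chi-square test of independence overall, and then performs Bonferroni-corrected pairwise comparisons. The abstract does not specify which statistical test the authors used, and the counts below are illustrative placeholders back-calculated from the reported percentages assuming all 423 questions per model, not the study's actual data.

```python
# Hypothetical sketch of the kind of between-model comparison the abstract
# describes. Counts are illustrative placeholders derived from the reported
# percentages (assuming 423 questions per model), NOT the study's raw data.
from itertools import combinations
from scipy.stats import chi2_contingency

# (correct, incorrect) answer counts per LLM -- placeholder values
results = {
    "ChatGPT-4":      (331, 92),   # ~78.2%
    "Bing":           (284, 139),  # ~67.2%
    "Claude":         (272, 151),  # ~64.4%
    "Claude Instant": (266, 157),  # ~62.9%
    "Perplexity":     (237, 186),  # ~56.1%
}

# Overall test: does accuracy depend on which LLM answered?
table = [list(counts) for counts in results.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"overall: chi2={chi2:.2f}, dof={dof}, p={p:.2g}")

# Pairwise 2x2 comparisons with a Bonferroni-adjusted threshold
pairs = list(combinations(results, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    _, p_pair, _, _ = chi2_contingency([results[a], results[b]])
    flag = "significant" if p_pair < alpha else "n.s."
    print(f"{a} vs {b}: p={p_pair:.3g} ({flag})")
```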