PLoS ONE (Jan 2025)

Accuracy of latest large language models in answering multiple choice questions in dentistry: A comparative study.

  • Huy Cong Nguyen,
  • Hai Phong Dang,
  • Thuy Linh Nguyen,
  • Viet Hoang,
  • Viet Anh Nguyen

DOI
https://doi.org/10.1371/journal.pone.0317423
Journal volume & issue
Vol. 20, no. 1
p. e0317423

Abstract


Objectives
This study aims to evaluate the performance of the latest large language models (LLMs) in answering dental multiple choice questions (MCQs), including both text-based and image-based questions.

Material and methods
A total of 1490 MCQs from two board review books for the United States National Board Dental Examination were selected. This study evaluated six of the latest LLMs as of August 2024: ChatGPT 4.0 omni (OpenAI), Gemini Advanced 1.5 Pro (Google), Copilot Pro with GPT-4 Turbo (Microsoft), Claude 3.5 Sonnet (Anthropic), Mistral Large 2 (Mistral AI), and Llama 3.1 405b (Meta). χ² tests were performed to determine whether there were significant differences in the percentages of correct answers among the LLMs, for both the total sample and each discipline.

Results
Significant differences were observed in the percentage of accurate answers among the six LLMs across text-based questions, image-based questions, and the total sample.

Conclusions
Newer versions of LLMs demonstrate superior performance in answering dental MCQs compared to earlier versions. Copilot, Claude, and ChatGPT achieved high accuracy on text-based questions but low accuracy on image-based questions. LLMs capable of handling image-based questions outperformed LLMs limited to text-based questions.

Clinical relevance
Dental clinicians and students should prioritize the most up-to-date LLMs to support their learning, clinical practice, and research.