Digital Health (Jul 2024)

Diagnostic performance of generative artificial intelligences for a series of complex case reports

  • Takanobu Hirosawa,
  • Yukinori Harada,
  • Kazuya Mizuta,
  • Tetsu Sakamoto,
  • Kazuki Tokumasu,
  • Taro Shimizu

DOI: https://doi.org/10.1177/20552076241265215
Journal volume & issue: Vol. 10

Abstract

Background: The diagnostic performance of generative artificial intelligences (AIs) using large language models (LLMs) across comprehensive medical specialties is still unknown.

Objective: We aimed to evaluate the diagnostic performance of generative AIs using LLMs in a complex case series across comprehensive medical fields.

Methods: We analyzed published case reports from the American Journal of Case Reports from January 2022 to March 2023. We excluded pediatric cases and those primarily focused on management. We used three generative AIs to generate top-10 differential-diagnosis (DDx) lists from the case descriptions: the fourth-generation chat generative pre-trained transformer (ChatGPT-4), Google Gemini (previously Bard), and the Large Language Model Meta AI 2 (LLaMA2) chatbot. Two independent physicians assessed whether the final diagnosis was included in the lists generated by the AIs.

Results: Of 557 consecutive case reports, 392 were included. The inclusion rates of the final diagnosis within the top-10 DDx lists were 86.7% (340/392) for ChatGPT-4, 68.6% (269/392) for Google Gemini, and 54.6% (214/392) for the LLaMA2 chatbot. The top diagnosis matched the final diagnosis in 54.6% (214/392) of cases for ChatGPT-4, 31.4% (123/392) for Google Gemini, and 23.0% (90/392) for the LLaMA2 chatbot. ChatGPT-4 showed higher diagnostic accuracy than Google Gemini (P < 0.001) and the LLaMA2 chatbot (P < 0.001). Additionally, Google Gemini outperformed the LLaMA2 chatbot within the top-10 DDx lists (P < 0.001) and as the top diagnosis (P = 0.010).

Conclusions: This study demonstrated the diagnostic performance of generative AIs including ChatGPT-4, Google Gemini, and the LLaMA2 chatbot. ChatGPT-4 exhibited higher diagnostic accuracy than the other platforms. These findings highlight the importance of understanding the differences in diagnostic performance among generative AIs, especially in complex case series spanning comprehensive medical fields, such as general medicine.
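
As a rough illustration of how inclusion rates and pairwise model comparisons of this kind could be computed, the minimal Python sketch below tallies per-case hits and applies an exact McNemar-style test to the discordant pairs. The per-case data, variable names, and choice of test are assumptions for illustration only; the abstract does not specify the authors' statistical procedure.

```python
# Minimal sketch (not the authors' code): tally how often the final diagnosis
# appears in each model's top-10 differential-diagnosis (DDx) list, and compare
# two models on the same cases with an exact McNemar-style test on discordant pairs.
from scipy.stats import binomtest

# Hypothetical per-case evaluations: True if the final diagnosis was included
# in that model's top-10 DDx list for the case (one entry per case report).
chatgpt4_hits = [True, True, False, True, True]
gemini_hits = [True, False, False, True, False]

def inclusion_rate(hits):
    """Proportion of cases whose final diagnosis appears in the DDx list."""
    return sum(hits) / len(hits)

print(f"ChatGPT-4 inclusion rate: {inclusion_rate(chatgpt4_hits):.1%}")
print(f"Gemini inclusion rate:    {inclusion_rate(gemini_hits):.1%}")

# Discordant pairs: cases where exactly one of the two models included the
# final diagnosis. Under the null hypothesis, each model "wins" a discordant
# pair with probability 0.5, so an exact binomial test gives the McNemar P value.
b = sum(1 for a, g in zip(chatgpt4_hits, gemini_hits) if a and not g)
c = sum(1 for a, g in zip(chatgpt4_hits, gemini_hits) if g and not a)
if b + c > 0:
    result = binomtest(b, b + c, p=0.5)
    print(f"Discordant pairs {b} vs {c}, exact McNemar P = {result.pvalue:.3f}")
```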