Diagnostics (Jul 2024)

Evaluating Large Language Model (LLM) Performance on Established Breast Classification Systems

  • Syed Ali Haider,
  • Sophia M. Pressman,
  • Sahar Borna,
  • Cesar A. Gomez-Cabello,
  • Ajai Sehgal,
  • Bradley C. Leibovich,
  • Antonio Jorge Forte

DOI
https://doi.org/10.3390/diagnostics14141491
Journal volume & issue
Vol. 14, no. 14
p. 1491

Abstract
Medical researchers are increasingly utilizing advanced LLMs like ChatGPT-4 and Gemini to enhance diagnostic processes in the medical field. This research focuses on their ability to comprehend and apply complex medical classification systems for breast conditions, which can significantly aid plastic surgeons in making informed decisions for diagnosis and treatment, ultimately leading to improved patient outcomes. Fifty clinical scenarios were created to evaluate the classification accuracy of each LLM across five established breast-related classification systems. Scores from 0 to 2 were assigned to LLM responses to denote incorrect, partially correct, or completely correct classifications. Descriptive statistics were employed to compare the performances of ChatGPT-4 and Gemini. Gemini exhibited superior overall performance, achieving 98% accuracy compared to ChatGPT-4’s 71%. While both models performed well in the Baker classification for capsular contracture and UTSW classification for gynecomastia, Gemini consistently outperformed ChatGPT-4 in other systems, such as the Fischer Grade Classification for gender-affirming mastectomy, Kajava Classification for ectopic breast tissue, and Regnault Classification for breast ptosis. With further development, integrating LLMs into plastic surgery practice will likely enhance diagnostic support and decision making.
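
The abstract reports overall accuracies (98% for Gemini, 71% for ChatGPT-4) derived from 0–2 rubric scores over 50 scenarios, but does not spell out the arithmetic. The sketch below is one plausible reading, assuming accuracy is the share of points earned out of the maximum attainable; the function name and example scores are illustrative and not the study's data.

```python
# Illustrative sketch (not the authors' code): computing percent accuracy
# from 0/1/2 rubric scores, assuming accuracy = points earned / max possible.

def percent_accuracy(scores: list[int]) -> float:
    """Return accuracy as a percentage of the maximum attainable score.

    Each score is 0 (incorrect), 1 (partially correct), or 2 (completely correct).
    """
    if not scores:
        raise ValueError("scores must not be empty")
    max_points = 2 * len(scores)
    return 100.0 * sum(scores) / max_points


# Hypothetical example with 5 scenarios (the paper used 50 per model):
# two fully correct, two partially correct, one incorrect -> 60% accuracy.
example_scores = [2, 2, 1, 1, 0]
print(f"{percent_accuracy(example_scores):.0f}%")  # prints "60%"
```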

Keywords