Applied Medical Informatics (Sep 2024)

Evaluating the Ability of Chatbots to Answer Entrance Exam Questions for Postgraduate Studies in Medical Laboratory Sciences in Iran

  • Farhad AREFINIA,
  • Azamossadat HOSSEINI,
  • Farkhondeh ASADI,
  • Versa OMRANI-NAVA,
  • Raham NILOOFARI

Journal volume & issue
Vol. 46, no. 3

Abstract


As educational technology advances, the integration of Artificial Intelligence (AI)-driven chatbots into academic contexts becomes increasingly relevant. This study evaluated the performance of three advanced chatbots (ChatGPT 3.5, Claude, and Google Bard) in answering entrance exam questions for Master's and PhD programs in Medical Laboratory Sciences in Iran. Multiple-choice questions from the 2023 entrance exams for these programs were presented to each chatbot, and the responses were evaluated. ChatGPT 3.5, Claude, and Google Bard achieved overall accuracies of 38%, 42%, and 37%, respectively, showing comparable baseline proficiency across a variety of questions. Subject-specific analysis highlighted their strengths and weaknesses in different scientific domains. While the evaluated chatbots showed some ability to answer medical laboratory science questions, their performance remains insufficient for success in postgraduate entrance exams.
