BMC Research Notes (Sep 2024)

The performance of OpenAI ChatGPT-4 and Google Gemini in virology multiple-choice questions: a comparative analysis of English and Arabic responses

  • Malik Sallam,
  • Kholoud Al-Mahzoum,
  • Rawan Ahmad Almutawaa,
  • Jasmen Ahmad Alhashash,
  • Retaj Abdullah Dashti,
  • Danah Raed AlSafy,
  • Reem Abdullah Almutairi,
  • Muna Barakat

DOI: https://doi.org/10.1186/s13104-024-06920-7
Journal volume & issue: Vol. 17, no. 1, pp. 1–8

Abstract

Objective

The integration of artificial intelligence (AI) in healthcare education is inevitable. Understanding the proficiency of generative AI in different languages to answer complex questions is crucial for educational purposes. The study objective was to compare the performance of ChatGPT-4 and Gemini in answering virology multiple-choice questions (MCQs) in English and Arabic, while assessing the quality of the generated content. Both AI models' responses to 40 virology MCQs were assessed for correctness and quality based on the CLEAR tool, which is designed for the evaluation of AI-generated content. The MCQs were classified into lower and higher cognitive categories based on the revised Bloom's taxonomy. The study design followed the METRICS checklist for the design and reporting of generative AI-based studies in healthcare.

Results

ChatGPT-4 and Gemini performed better in English than in Arabic, with ChatGPT-4 consistently surpassing Gemini in correctness and CLEAR scores. ChatGPT-4 led Gemini with 80% vs. 62.5% correctness in English, compared to 65% vs. 55% in Arabic. Both AI models performed better on questions in the lower cognitive domains. Both ChatGPT-4 and Gemini exhibited potential in educational applications; nevertheless, their performance varied across languages, highlighting the importance of continued development to ensure effective AI integration in healthcare education globally.
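As a quick illustration of what the reported percentages mean in absolute terms over the 40-item question bank, the sketch below converts each correctness rate into a count of correctly answered MCQs. This is illustrative arithmetic only, not the authors' analysis code; the variable and key names are ours.

    # Illustrative only: map the abstract's reported correctness
    # percentages to counts of correct answers out of 40 MCQs.
    # Names and structure are hypothetical, not from the study.

    TOTAL_MCQS = 40

    reported_correctness = {  # (model, language) -> reported % correct
        ("ChatGPT-4", "English"): 80.0,
        ("Gemini", "English"): 62.5,
        ("ChatGPT-4", "Arabic"): 65.0,
        ("Gemini", "Arabic"): 55.0,
    }

    for (model, language), pct in reported_correctness.items():
        correct = round(pct / 100 * TOTAL_MCQS)
        print(f"{model} ({language}): {correct}/{TOTAL_MCQS} correct ({pct}%)")

Running this yields 32/40 and 25/40 correct answers in English for ChatGPT-4 and Gemini respectively, versus 26/40 and 22/40 in Arabic, which makes the cross-language gap reported in the abstract concrete.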