Advances in Medical Education and Practice (May 2024)

Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom’s Taxonomy

  • Bharatha A,
  • Ojeh N,
  • Fazle Rabbi AM,
  • Campbell MH,
  • Krishnamurthy K,
  • Layne-Yarde RNA,
  • Kumar A,
  • Springer DCR,
  • Connell KL,
  • Majumder MAA

Journal volume & issue
Vol. 15
pp. 393–400

Abstract


Ambadasu Bharatha,1 Nkemcho Ojeh,1 Ahbab Mohammad Fazle Rabbi,2 Michael H Campbell,1 Kandamaran Krishnamurthy,1 Rhaheem NA Layne-Yarde,1 Alok Kumar,1 Dale CR Springer,1 Kenneth L Connell,1 Md Anwarul Azim Majumder1

1Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados; 2Department of Population Sciences, University of Dhaka, Dhaka, Bangladesh

Correspondence: Md Anwarul Azim Majumder, Director of Medical Education, Faculty of Medical Sciences, The University of the West Indies, Cave Hill Campus, Barbados, Email [email protected]; Ambadasu Bharatha, Lecturer in Pharmacology, Faculty of Medical Sciences, The University of the West Indies, Cave Hill Campus, Barbados, Email [email protected]

Introduction: This research investigated the capabilities of ChatGPT-4 compared with medical students in answering MCQs, using the revised Bloom’s Taxonomy as a benchmark.

Methods: A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing.

Results: The study included 304 MCQs. Students demonstrated good knowledge, with 78% correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) than the students (66.7%). Course type significantly affected ChatGPT-4’s performance, but revised Bloom’s Taxonomy levels did not. A detailed association check between program levels and Bloom’s Taxonomy levels for ChatGPT-4’s correct answers showed a highly significant correlation (p < 0.001), reflecting a concentration of “remember”-level questions in preclinical courses and “evaluate”-level questions in clinical courses.

Discussion: The study highlights ChatGPT-4’s proficiency on standardized tests but indicates limitations in clinical reasoning and practical skills. This performance discrepancy suggests that the effectiveness of artificial intelligence (AI) varies with course content.

Conclusion: While ChatGPT-4 shows promise as an educational tool, its role should be supplementary, with strategic integration into medical education to leverage its strengths and address its limitations. Further research is needed to explore AI’s impact on medical education and student performance across educational levels and courses.

Keywords: artificial intelligence, ChatGPT-4, medical students, knowledge, interpretation abilities, multiple choice questions
