Interdisciplinary Journal of Virtual Learning in Medical Sciences (Dec 2023)

Assessment of the Capability of ChatGPT-3.5 in Medical Physiology Examination in an Indian Medical School

  • Himel Mondal,
  • Anup Kumar Dhanvijay,
  • Ayesha Juhi,
  • Amita Singh,
  • Mohammed Jaffer Pinjar,
  • Anita Kumari,
  • Swati Mittal,
  • Amita Kumari,
  • Shaikat Mondal

DOI: https://doi.org/10.30476/ijvlms.2023.98496.1221
Journal volume & issue: Vol. 14, No. 4, pp. 311–317

Abstract


Background: There has been increasing interest in exploring the capabilities of artificial intelligence (AI) in various fields, including education. Medical education is an area where AI can potentially have a significant impact, especially in helping students answer their customized questions. In this study, we aimed to investigate the capability of ChatGPT, a conversational AI model, to generate answers to medical physiology examination questions in an Indian medical school.

Methods: This cross-sectional study was conducted in March 2023 at an Indian medical school in Deoghar, Jharkhand, India. The first mid-semester physiology examination was taken as the reference examination. It comprised two long essay questions and five short essay questions (total marks: 40), plus 20 multiple-choice questions (MCQs) (total marks: 10). We generated responses from ChatGPT (March 13 version) for both the essay and MCQ questions. The essay-type answer sheet was evaluated by five faculty members, and the average was taken as the final score. The scores of 125 students (all first-year medical students) in the examination were obtained from the departmental registry. The median score of the 125 students was compared with the score of ChatGPT using the Mann-Whitney U test.

Results: The median score of the 125 students on essay-type questions was 20.5 (Q1-Q3: 18-23.5), corresponding to a median percentage of 51.25% (Q1-Q3: 45-58.75) (P=0.147). The answer generated by ChatGPT scored 21.5 (Q1-Q3: 21.5-22), corresponding to 53.75% (Q1-Q3: 53.75-55) (P=0.125). Hence, ChatGPT scored similarly to the students on essay-type questions (P=0.4). On MCQs, ChatGPT answered 19 of 20 questions correctly (score=9.5), which was higher than the students' median score of 6 (Q1-Q3: 5-6.5) (P<0.0001).

Conclusion: ChatGPT has the potential to generate answers to medical physiology examination questions, and it is more capable of solving MCQs than essay-type questions. Although ChatGPT provided answers of sufficient quality to pass the examination, its capability to generate high-quality answers for educational purposes is yet to be established. Hence, its usage in medical education for teaching and learning purposes remains to be explored.
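The central comparison in the Methods is a Mann-Whitney U test between the 125 students' essay scores and the score awarded to ChatGPT's answer sheet. As a minimal sketch of how such a comparison can be run, the Python example below uses scipy.stats.mannwhitneyu; the score arrays are invented placeholders loosely matched to the reported medians, not the study's data.

```python
# Minimal sketch of the Mann-Whitney U comparison described in Methods.
# The score arrays below are hypothetical placeholders, NOT the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical essay scores (out of 40) for 125 students,
# centered near the reported median of 20.5
rng = np.random.default_rng(0)
student_scores = rng.normal(loc=20.5, scale=4.0, size=125).clip(0, 40)

# Five faculty ratings of ChatGPT's essay answer sheet (out of 40),
# mirroring the study's design of averaging five independent evaluations
chatgpt_scores = np.array([21.5, 21.5, 22.0, 21.5, 22.0])

# Two-sided Mann-Whitney U test: do the two score distributions differ?
u_stat, p_value = mannwhitneyu(student_scores, chatgpt_scores,
                               alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")

# Summary in the abstract's style: median with Q1-Q3
q1, med, q3 = np.percentile(student_scores, [25, 50, 75])
print(f"Students: median {med:.1f} (Q1-Q3: {q1:.1f}-{q3:.1f})")
```

The Mann-Whitney U test is a sensible choice here because it compares ranks rather than means, so it does not assume normally distributed scores and tolerates the very unequal group sizes (125 students versus one averaged ChatGPT score rated by five evaluators).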

Keywords