Canadian Medical Education Journal (Jan 2025)
Investigating the threat of AI to undergraduate medical school admissions: a study of its potential impact on the rating of applicant essays
Abstract
Background: Medical school applications often require short written essays or personal statements, which are purportedly used to assess professional qualities related to the practice of medicine. With generative artificial intelligence (AI) tools capable of supplementing or replacing input from human applicants, concerns are growing about how these tools affect written assessments. This study explores how AI influences the ratings of essays used for medical school admissions.

Methods: A within-subject experimental design was employed. Eight participants (academic clinicians, faculty researchers, medical students, and a community member) rated essays written by 24 undergraduate students and recent graduates from McMaster University. The writers were divided into four groups: medical school aspirants with AI assistance (ASP-AI), aspirants without AI assistance (ASP), non-aspirants with AI assistance (NASP-AI), and essays generated solely by ChatGPT 3.5 (AI-ONLY). Before rating, participants were trained in the application of a single Likert-scale tool. Differences in ratings by writer group were assessed via one-way between-groups ANOVA.

Results: Analyses revealed no statistically significant differences in ratings across the four writer groups (p = .358). The intraclass correlation coefficient was .147.

Conclusion: The proliferation of AI adds to prevailing questions about the value of personal statements and essays in supporting applicant selection. We speculate that these assessments hold less value than ever in providing authentic insight into applicant attributes. In this context, we suggest that medical schools move away from the use of essays in their admissions processes.
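For readers unfamiliar with the analysis named in the Methods, the following is a minimal sketch of a one-way between-groups ANOVA comparing mean essay ratings across four writer groups. The group names mirror those in the study, but the ratings below are invented illustrative values, not the study's data.

```python
# Hypothetical sketch: one-way between-groups ANOVA across four writer groups.
# Ratings are invented Likert-scale values for demonstration only.
from scipy.stats import f_oneway

asp_ai  = [4, 5, 3, 4, 4, 5]   # aspirants with AI assistance (illustrative)
asp     = [3, 4, 4, 5, 3, 4]   # aspirants without AI assistance (illustrative)
nasp_ai = [4, 3, 5, 4, 4, 3]   # non-aspirants with AI assistance (illustrative)
ai_only = [5, 4, 4, 3, 4, 4]   # essays generated solely by AI (illustrative)

# f_oneway tests the null hypothesis that all group means are equal.
f_stat, p_value = f_oneway(asp_ai, asp, nasp_ai, ai_only)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```

A p-value above the conventional .05 threshold, as reported in the study (p = .358), would indicate no statistically significant difference in mean ratings among the groups.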