Advances in Medical Education and Practice (May 2017)
Investigating a self-scoring interview simulation for learning and assessment in the medical consultation
Catherine Bruen,1 Clarence Kreiter,2 Vincent Wade,3 Teresa Pawlikowska1

1Health Professions Education Centre, Faculty of Medicine and Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland; 2Department of Family Medicine, Carver College of Medicine, University of Iowa, Iowa City, Iowa, USA; 3School of Computer Science and Statistics, Faculty of Engineering, Mathematics and Science, Trinity College Dublin, Dublin, Ireland

Abstract: Experience with simulated patients supports undergraduate learning of medical consultation skills, and adaptive simulations are now being introduced into this environment. The authors investigate whether such simulations can underpin valid and reliable assessment by conducting a generalizability analysis of IT data analytics captured from medical students' interactions with an adaptive simulation in psychiatry, exploring the feasibility of adaptive simulations for supporting automated learning and assessment. The generalizability (G) study focused on two clinically relevant variables: clinical decision points and communication skills. While the G study of the communication skills score yielded low levels of true score variance, the decision-point scores, which reflect clinical decision-making and confirm user knowledge of the Calgary–Cambridge model of consultation, produced reliability levels similar to those expected with rater-based scoring. The findings indicate that adaptive simulations have potential as a teaching and assessment tool for medical consultations.

Keywords: medical education, simulation technology, competency assessment, generalizability theory
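The reliability claims above rest on the standard G-theory variance decomposition. As a minimal sketch, assuming a simple crossed person-by-case design (a simplification; the study's actual design may involve additional facets such as occasions or scoring algorithms), the generalizability coefficient relates universe (true) score variance to observed score variance:

\[
E\rho^2 \;=\; \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pc,e}/n_c}
\]

where \(\sigma^2_p\) is the variance attributable to persons (true score variance), \(\sigma^2_{pc,e}\) is the person-by-case interaction plus residual error variance, and \(n_c\) is the number of cases. Low true score variance, as reported for the communication skills score, drives the coefficient toward zero regardless of how many cases are sampled, whereas the higher true score variance in the decision-point scores yields coefficients comparable to rater-based scoring.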