Advances in Medical Education and Practice (Feb 2015)
Do medical students’ scores using different assessment instruments predict their scores in clinical reasoning using a computer-based simulation?
Abstract
Mariam Fida,¹ Salah Eldin Kassab²
¹Department of Molecular Medicine, College of Medicine and Medical Sciences, Arabian Gulf University, Manama, Bahrain; ²Department of Medical Education, Faculty of Medicine, Suez Canal University, Ismailia, Egypt

Purpose: The development of clinical problem-solving skills evolves over time and requires structured training and background knowledge. Computer-based case simulations (CCS) have been used for teaching and assessing clinical reasoning skills. However, previous studies examining the psychometric properties of CCS as an assessment tool have reported conflicting findings, and studies on the integration of CCS into problem-based medical curricula remain limited.

Methods: This study examined the psychometric properties of CCS software (DxR Clinician) used to assess medical students (n=130) studying in a problem-based, integrated multisystem module (Unit IX) during the academic year 2011–2012. Internal consistency reliability of the CCS scores was calculated using Cronbach's alpha. The relationships between students' scores on the CCS components (clinical reasoning, diagnostic performance, and patient management) and their scores on the other end-of-unit examination tools, including multiple-choice questions, short-answer questions, the objective structured clinical examination (OSCE), and real patient encounters, were analyzed using stepwise hierarchical linear regression.

Results: Internal consistency reliability of the CCS scores was high (α=0.862). Correlations between students' scores on the different CCS components, and between CCS scores and scores on the other test items, were statistically significant. Regression analysis indicated that OSCE scores predicted 32.7% and 35.1% of the variance in clinical reasoning and patient management scores, respectively (P<0.01). Multiple-choice question scores, however, predicted only 15.4% of the variance in diagnostic performance scores (P<0.01), while students' scores in real patient encounters did not predict any of the CCS scores.

Conclusion: Students' OSCE scores are the most important predictors of their clinical reasoning and patient management scores on CCS. Real patient encounter assessment, however, does not appear to test a construct similar to that tested by CCS.

Keywords: medical education, computer-based simulations, virtual patients, student assessment, PBL, Bahrain
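The statistical methods named in the abstract can be illustrated with a brief, hypothetical sketch. The Python code below is not the authors' analysis; it only shows how Cronbach's alpha and a blockwise (hierarchical) ordinary least squares regression of the kind described might be computed, with the data file and column names (unit9_scores.csv, ccs_*, osce, mcq, clinical_reasoning) invented purely for illustration.

```python
# Illustrative sketch only (not the study's code): Cronbach's alpha for CCS
# component scores and a two-step hierarchical regression predicting clinical
# reasoning scores from OSCE scores, then OSCE + MCQ scores.
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical per-student score table
scores = pd.read_csv("unit9_scores.csv")

# Internal consistency of the CCS item scores (hypothetical "ccs_" column prefix)
ccs_items = scores.filter(like="ccs_")
print(f"Cronbach's alpha = {cronbach_alpha(ccs_items):.3f}")

# Hierarchical (blockwise) OLS: enter OSCE first, then add MCQ, and compare R^2
y = scores["clinical_reasoning"]
step1 = sm.OLS(y, sm.add_constant(scores[["osce"]])).fit()
step2 = sm.OLS(y, sm.add_constant(scores[["osce", "mcq"]])).fit()
print(f"R^2, OSCE only: {step1.rsquared:.3f}")
print(f"R^2, OSCE + MCQ: {step2.rsquared:.3f} (delta = {step2.rsquared - step1.rsquared:.3f})")
```

In a hierarchical regression of this kind, the increase in R² between steps indicates how much additional variance the newly entered predictor explains beyond the predictors already in the model.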