The reliability of the pass/fail decision for assessments comprised of multiple components

GMS Zeitschrift für Medizinische Ausbildung. 2015;32(4):Doc42 DOI 10.3205/zma000984

 

Journal Title: GMS Zeitschrift für Medizinische Ausbildung

ISSN: 1860-7446 (Print); 1860-3572 (Online)

Publisher: German Medical Science GMS Publishing House

Society/Institution: Gesellschaft für Medizinische Ausbildung

LCC Subject Category: Education: Special aspects of education | Medicine: Medicine (General)

Country of publisher: Germany

Language of fulltext: German, English

Full-text formats available: PDF, HTML, XML

 

AUTHORS

Möltner, Andreas (Ruprecht-Karls-Universität Heidelberg, Kompetenzzentrum Prüfungen in der Medizin Baden-Württemberg, Heidelberg, Deutschland)
Tımbıl, Sevgi (Ruprecht-Karls-Universität Heidelberg, Kompetenzzentrum Prüfungen in der Medizin Baden-Württemberg, Heidelberg, Deutschland)
Jünger, Jana (Ruprecht-Karls-Universität Heidelberg, Kompetenzzentrum Prüfungen in der Medizin Baden-Württemberg, Heidelberg, Deutschland)

EDITORIAL INFORMATION

Blind peer review

Time from submission to publication: 25 weeks

 

ABSTRACT

Objective: The decision with the most serious consequences for a student taking an assessment is whether to pass or fail that student. For this reason, high-quality assessments require that the reliability of the pass/fail decision be determined, just as the measurement reliability of the point scores is. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of one another. When separate pass/fail decisions are combined "conjunctively", as with other complex decision rules for passing, adequate methods of analysis are needed to estimate the accuracy and consistency of the resulting classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability of the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements.

Method: We investigated the accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg's Faculty of Medicine. This cluster requires students to pass three components separately (two written exams and an OSCE), and students may reattempt each component twice. The analysis was carried out using the method described by Douglas and Mislevy.

Results: When complex logical connections exist between the individual pass/fail decisions and failure rates are low, the overall decision to grant graded course credit frequently achieves only a very low reliability, even if the individual components are highly reliable.
For the example analyzed here, the classification accuracy and consistency of the conjunctive combination of the three components are relatively low, with κ=0.49 and κ=0.47 respectively, despite reliabilities above 0.75 for each individual component. The option to repeat each component twice means that only about half of the candidates who do not satisfy the minimum requirements fail the overall assessment, while the other half are able to continue their studies despite deficient knowledge and skills.

Conclusion: The method put forth by Douglas and Mislevy allows the decision accuracy and consistency of complex combinations of scores from different components to be analyzed. Even highly reliable components do not guarantee a reliable pass/fail decision, for instance when failure rates are low. Assessments should be administered with the explicit goal of identifying examinees who do not fulfill the minimum requirements.
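The mechanism described above — retakes combined with a conjunctive passing rule eroding the consistency of the overall decision — can be illustrated with a small Monte Carlo sketch. This is not the Douglas and Mislevy procedure itself, and all parameters (cutoff, error of measurement, ability distribution, number of examinees) are hypothetical choices for illustration only:

```python
import random

random.seed(0)

N = 10_000     # simulated examinees (hypothetical)
CUTOFF = 60.0  # pass mark per component (hypothetical)
SEM = 5.0      # standard error of measurement per attempt (hypothetical)
ATTEMPTS = 3   # first sitting plus two retakes, as in the cluster described

def passes_component(true_score):
    """A component is passed if any of the ATTEMPTS noisy observed scores clears the cutoff."""
    return any(random.gauss(true_score, SEM) >= CUTOFF for _ in range(ATTEMPTS))

true_pass = obs_pass = both = 0
for _ in range(N):
    # one overall ability plus component-specific variation (hypothetical distribution)
    ability = random.gauss(65, 8)
    trues = [ability + random.gauss(0, 3) for _ in range(3)]
    t = all(s >= CUTOFF for s in trues)          # truly meets the minimum on all parts
    o = all(passes_component(s) for s in trues)  # observed conjunctive decision
    true_pass += t
    obs_pass += o
    both += t and o

# Cohen's kappa between true mastery status and the observed conjunctive decision
p_agree = (both + (N - true_pass - obs_pass + both)) / N
p_t, p_o = true_pass / N, obs_pass / N
p_chance = p_t * p_o + (1 - p_t) * (1 - p_o)
kappa = (p_agree - p_chance) / (1 - p_chance)
print(f"true pass rate {p_t:.2f}, observed pass rate {p_o:.2f}, kappa {kappa:.2f}")
```

Under these assumptions, the observed pass rate exceeds the true rate of mastery because each retake gives a borderline candidate another draw from the error distribution, which is exactly the effect the study reports: a substantial share of candidates below the minimum requirements eventually pass the conjunctive assessment.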