Journal of Medical Education and Curricular Development (Feb 2022)

Assessing a Capstone Research Project in Medical Training: Examiner Consistency Using Generic Versus Domain-Specific Rubrics

  • Katharine J. Reid,
  • Neville G. Chiavaroli,
  • Justin L. C. Bilszta

DOI
https://doi.org/10.1177/23821205221081813
Journal volume & issue
Vol. 9

Abstract


Rubrics are used extensively in tertiary contexts to assess student performance on written tasks; however, their use for the assessment of research projects has received little attention. In particular, there is little evidence on the reliability of examiner judgements according to rubric type (generic or task-specific) in a research context. This study examines the concordance between pairs of examiners assessing a medical student research project over a two-year period employing a generic rubric, followed by a subsequent two-year implementation of task-specific rubrics. Based on examiner feedback and the available literature, we expected that the task-specific rubrics would increase the consistency of examiner judgements and reduce the need for arbitration due to discrepant marks. However, the results showed that the generic rubric produced greater consistency of examiner judgements and fewer arbitrations than the task-specific rubrics. These findings have practical implications for educational practice in the assessment of research projects and contribute valuable empirical evidence to inform the development and use of rubrics in medical education.