MedEdPORTAL (May 2017)

Measuring Assessment Quality With an Assessment Utility Rubric for Medical Education

  • Jorie M. Colbert-Getz
  • Michael Ryan
  • Erin Hennessey
  • Brenessa Lindeman
  • Brian Pitts
  • Kim A. Rutherford
  • Deborah Schwengel
  • Stephen M. Sozio
  • Jessica George
  • Julianna Jung

DOI: https://doi.org/10.15766/mep_2374-8265.10588
Journal volume & issue: Vol. 13

Abstract

Introduction: Prior research has identified seven elements of a good assessment, but these elements have not been operationalized as a rubric for rating assessment utility. Medical educators would benefit from a systematic way to evaluate an assessment's utility and determine whether the assessment is optimal for their setting.

Methods: We developed and refined an assessment utility rubric using a modified Delphi process. Twenty-nine graduate students pilot-tested the rubric in 2016 with hypothetical data from three examinations, and interrater reliability of rubric scores was measured with intraclass correlation coefficients (ICCs).

Results: Consensus on all rubric items was reached after three rounds. The resulting assessment utility rubric includes four elements (equivalence, educational effect, catalytic effect, acceptability) with three items each, one element (validity evidence) with five items, and space to record four feasibility items relating to time and cost. Rater scores had ICC values greater than .75.

Discussion: The rubric shows promise in allowing educators to evaluate the utility of an assessment specific to their setting. The medical education field needs to give more consideration to how an assessment drives learning forward, how it motivates trainees, and whether it produces acceptable score ranges for all stakeholders.
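The abstract does not state which ICC form the authors used. As an illustration only, the sketch below shows how ICCs for multirater rubric scores might be computed with Python's pingouin library; the exams, raters, and scores are hypothetical, and the choice among the reported ICC models is an assumption rather than a detail from the study.

```python
# Minimal sketch: interrater reliability of rubric scores via intraclass
# correlation coefficients (ICCs). All data below are hypothetical.
import pandas as pd
import pingouin as pg

# Long-format data: three raters each score the same four examinations
# on a single rubric element (e.g., a 1-5 scale).
scores = pd.DataFrame({
    "exam":  ["A", "B", "C", "D"] * 3,
    "rater": ["r1"] * 4 + ["r2"] * 4 + ["r3"] * 4,
    "score": [3, 4, 2, 5,   3, 4, 3, 5,   2, 4, 2, 5],
})

# Returns ICC estimates for one-way and two-way models (ICC1..ICC3k).
icc = pg.intraclass_corr(data=scores, targets="exam",
                         raters="rater", ratings="score")

# Values above .75 are conventionally read as good reliability,
# matching the threshold reported in the abstract.
print(icc[["Type", "ICC", "CI95%"]])
```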

Keywords