Measuring Assessment Quality With an Assessment Utility Rubric for Medical Education
Jorie M. Colbert-Getz
Michael Ryan
Erin Hennessey
Brenessa Lindeman
Brian Pitts
Kim A. Rutherford
Deborah Schwengel
Stephen M. Sozio
Jessica George
Julianna Jung
Affiliations
Jorie M. Colbert-Getz
Assistant Dean of Assessment and Evaluation, University of Utah School of Medicine; Assistant Professor, Department of Internal Medicine, University of Utah School of Medicine
Michael Ryan
Assistant Dean for Clinical Medical Education, Virginia Commonwealth University School of Medicine; Associate Professor, Department of Pediatrics, Virginia Commonwealth University School of Medicine
Erin Hennessey
Program Director for the Anesthesia Critical Care Medicine Fellowship, Stanford University School of Medicine; Clinical Assistant Professor, Department of Anesthesia and Critical Care Medicine, Stanford University School of Medicine
Brenessa Lindeman
Fellow and Associate Surgeon, Department of Surgery, Brigham and Women's Hospital; Instructor of Surgery, Harvard Medical School
Brian Pitts
Associate Professor, Department of Anesthesiology, University of California, Davis, School of Medicine
Kim A. Rutherford
Assistant Professor, Departments of Pediatrics and Emergency Medicine, Pennsylvania State University College of Medicine
Deborah Schwengel
Program Director for the Anesthesiology Residency Program, Johns Hopkins University School of Medicine; Assistant Professor, Departments of Anesthesiology, Critical Care Medicine, and Pediatrics, Johns Hopkins University School of Medicine
Stephen M. Sozio
Associate Director of the Nephrology Fellowship Program, Johns Hopkins University School of Medicine; Assistant Professor, Department of Medicine, Johns Hopkins University School of Medicine
Jessica George
Assistant Professor of Anesthesiology and Critical Care Medicine, Johns Hopkins University School of Medicine
Julianna Jung
Associate Director, Johns Hopkins Medicine Simulation Center; Associate Professor, Department of Emergency Medicine, Johns Hopkins University School of Medicine
Abstract
Introduction
Prior research has identified seven elements of a good assessment, but these elements have not been operationalized as a rubric for rating assessment utility. A systematic way to evaluate an assessment's utility would help medical educators determine whether a given assessment is optimal for their setting.
Methods
We developed and refined an assessment utility rubric using a modified Delphi process. In 2016, 29 graduate students pilot-tested the rubric with hypothetical data from three examinations, and interrater reliability of rubric scores was measured with intraclass correlation coefficients (ICCs).
Results
Consensus on all rubric items was reached after three Delphi rounds. The resulting assessment utility rubric includes four elements (equivalence, educational effect, catalytic effect, acceptability) with three items each, one element (validity evidence) with five items, and space for four feasibility items relating to time and cost. Rater scores had ICC values greater than .75.
Discussion
The rubric shows promise in allowing educators to evaluate the utility of an assessment specific to their setting. The medical education field needs to give more consideration to how an assessment drives learning forward, how it motivates trainees, and whether it produces acceptable score ranges for all stakeholders.
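As an illustration of the interrater reliability analysis described in the Methods, the sketch below computes ICCs from long-format rater data using the pingouin library. The sample data, column names, and choice of ICC model are hypothetical assumptions for illustration; the abstract does not specify which ICC form the authors used.

```python
# A minimal sketch, assuming hypothetical data: computing intraclass
# correlation coefficients (ICCs) for rubric scores, as in the Methods.
# The exams, raters, and ratings below are invented for illustration.
import pandas as pd
import pingouin as pg

# Long-format data: each row is one rater's rubric score for one
# examination (the "target" of measurement).
scores = pd.DataFrame({
    "exam":   ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "rater":  ["r1", "r2", "r3", "r1", "r2", "r3", "r1", "r2", "r3"],
    "rating": [14, 15, 13, 20, 19, 21, 9, 10, 8],
})

# pingouin reports all six standard ICC forms (ICC1, ICC2, ICC3 and
# their average-measures counterparts) with 95% confidence intervals.
icc = pg.intraclass_corr(
    data=scores, targets="exam", raters="rater", ratings="rating"
)
print(icc[["Type", "ICC", "CI95%"]])

# By common convention, ICC values above .75 (as reported in the
# Results) indicate good to excellent interrater reliability.
```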