MedEdPORTAL (Dec 2022)

A Scoring Rubric for the Knowledge Section of the Systems Quality Improvement Training and Assessment Tool

  • Corrine Abraham,
  • Krysta Johnson-Martinez,
  • Anne Tomolo

DOI: https://doi.org/10.15766/mep_2374-8265.11290
Journal volume & issue: Vol. 18

Abstract


Introduction: Quality improvement (QI) competencies for health professions trainees were developed to address health care quality. Strategies to integrate QI into curricula exist, but methods for assessing interdisciplinary learners' competency are less developed. We refined the Knowledge section scoring rubric of the Systems Quality Improvement Training and Assessment Tool (SQI TAT) and examined its validity evidence.

Methods: In 2017, the SQI TAT Knowledge section was expanded to cover seven core QI concepts, and the scoring rubric was refined. Three coders independently scored 35 SQI TAT Knowledge sections (18 pretests, 17 posttests). Interrater reliability was assessed by percent agreement and Cohen's kappa for individual variables and by Lin's concordance correlation for total knowledge and application scores. Concurrent validity was assessed by comparing responses from two groups with different QI exposure and evaluating whether the rubric detected that difference.

Results: Average-measures concordance for total-score interrater reliability was .89 across all coders and exceeded .70 for six of seven concept scores. The total score discriminated between the two groups (p < .05), and five of seven concept scores were higher for the group with more QI experience. Total knowledge scores were significantly higher at posttest than at pretest (p < .001).

Discussion: The SQI TAT Knowledge section provides a comprehensive assessment of QI knowledge, and its scoring rubric discriminates QI knowledge along a continuum. Because the Knowledge section is not tied to a clinical context, it is useful for assessing interprofessional learners across varying education levels.
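For readers unfamiliar with the agreement statistics named in the Methods, the sketch below illustrates how percent agreement, Cohen's kappa, and Lin's concordance correlation coefficient might be computed for two coders. This is a minimal illustration, not the authors' analysis code: the rater scores are simulated placeholders, and the score ranges are assumptions for demonstration only.

```python
# Illustrative sketch of the agreement statistics mentioned in the Methods.
# All data below are simulated stand-ins, not SQI TAT results.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical item-level rubric scores (assumed 0-3 scale) from two coders on 35 tests.
coder_a = rng.integers(0, 4, size=35)
coder_b = np.clip(coder_a + rng.integers(-1, 2, size=35), 0, 3)

# Percent agreement and Cohen's kappa for a single rubric variable.
percent_agreement = np.mean(coder_a == coder_b)
kappa = cohen_kappa_score(coder_a, coder_b)

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for continuous total scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.cov(x, y, bias=True)[0, 1]
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical total Knowledge-section scores from the same two coders.
totals_a = rng.normal(20, 4, size=35)
totals_b = totals_a + rng.normal(0, 2, size=35)

print(f"Percent agreement: {percent_agreement:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
print(f"Lin's CCC:         {lins_ccc(totals_a, totals_b):.2f}")
```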
