Implementation Research and Practice (Apr 2021)

A systematic review of measures of implementation players and processes: Summarizing the dearth of psychometric evidence

  • Caitlin N Dorsey,
  • Kayne D Mettert,
  • Ajeng J Puspitasari,
  • Laura J Damschroder,
  • Cara C Lewis

DOI
https://doi.org/10.1177/26334895211002474
Journal volume & issue
Vol. 2

Abstract

Background: Measurement is a critical component for any field. Systematic reviews are a way to locate measures and uncover gaps in current measurement practices. The present study identified measures used in behavioral health settings that assessed all constructs within the Process domain and two constructs from the Inner setting domain as defined by the Consolidated Framework for Implementation Research (CFIR). While previous conceptual work has established the important role that social networks and key stakeholders play throughout the implementation process, measurement studies have not focused on investigating the quality of how these activities are carried out.

Methods: The review occurred in three phases. Phase I, data collection, included (1) search string generation, (2) title and abstract screening, (3) full text review, (4) mapping to CFIR constructs, and (5) "cited-by" searches. Phase II, data extraction, consisted of coding information relevant to the nine psychometric properties included in the Psychometric And Pragmatic Evidence Rating Scale (PAPERS). In Phase III, data analysis was completed.

Results: Measures were identified for only seven constructs: Structural characteristics (n = 13), Networks and communication (n = 29), Engaging (n = 1), Opinion leaders (n = 5), Champions (n = 5), Planning (n = 5), and Reflecting and evaluating (n = 5). No quantitative assessment measures of Formally appointed implementation leaders, External change agents, or Executing were identified. Internal consistency and norms were reported most often, whereas no studies reported on discriminant validity or responsiveness. Not one measure in the sample reported all nine psychometric properties evaluated by the PAPERS. Scores in the identified sample of measures ranged from -2 to 10 out of a possible 36.

Conclusions: Overall, measures demonstrated minimal to adequate evidence, and the available psychometric information was limited. The majority were study specific, limiting their generalizability. Future work should focus on more rigorous development and testing of currently existing measures, while moving away from creating new, single-use measures.

Plain Language Summary: How we measure the processes and players involved in implementing evidence-based interventions is crucial to understanding what factors are helping or hurting an intervention's use in practice and how to take the intervention to scale. Unfortunately, measures of these factors—stakeholders, their networks and communication, and their implementation activities—have received little attention. This study sought to identify and evaluate the quality of these types of measures. Our review focused on collecting measures used for identifying influential staff members, known as opinion leaders and champions, and for investigating how they plan, execute, engage in, and evaluate the hard work of implementation. Upon identifying these measures, we collected all published information about their uses to evaluate the quality of their evidence with respect to whether they produce consistent results across items within each use (i.e., reliability) and whether they assess what they intend to measure (i.e., validity). Our searches located over 40 measures deployed in behavioral health settings. We observed a dearth of evidence for reliability and validity, and when evidence did exist, its quality was low. These findings tell us that more measurement work is needed to better understand how to optimize players and processes for successful implementation.