JMIR Research Protocols (Dec 2024)

Evaluation Methods, Indicators, and Outcomes in Learning Health Systems: Protocol for a Jurisdictional Scan

  • Shelley Vanderhout,
  • Marissa Bird,
  • Antonia Giannarakos,
  • Balpreet Panesar,
  • Carly Whitmore

DOI: https://doi.org/10.2196/57929

Journal volume & issue: Vol. 13, p. e57929

Abstract

Background: In learning health systems (LHSs), real-time evidence, informatics, patient-provider partnerships and experiences, and organizational culture are combined to conduct “learning cycles” that support improvements in care. Although the concept of LHSs is fairly well established in the literature, evaluation methods, mechanisms, and indicators are less consistently described. Furthermore, LHSs often use “usual care” or the “status quo” as a benchmark for comparing new approaches to care, but disentangling usual care from the multifarious care modalities found across settings is challenging. There is a need to identify which evaluation methods are used within LHSs, describe how LHS growth and maturity are conceptualized, and determine what tools and measures are being used to evaluate LHSs at the system level.

Objective: This study aimed to (1) identify international examples of LHSs and describe their evaluation approaches, frameworks, indicators, and outcomes; and (2) describe common characteristics, emphases, assumptions, or challenges in establishing counterfactuals in LHSs.

Methods: A jurisdictional scan, which is a method used to explore, understand, and assess how problems have been framed by others in a given field, will be conducted according to modified PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. LHSs will be identified through a search of peer-reviewed and gray literature using Ovid MEDLINE, EBSCO CINAHL, Ovid Embase, Clarivate Web of Science, PubMed non-MEDLINE databases, and the web. We will describe evaluation approaches used at both the LHS learning cycle and system levels. To gain a comprehensive understanding of each LHS, including details specific to evaluation, self-identified LHSs will be included if they are described according to at least 4 of 11 prespecified criteria (core functionalities, analytics, use of evidence, co-design or implementation, evaluation, change management or governance structures, data sharing, knowledge sharing, training or capacity building, equity, and sustainability). Search results will be screened, extracted, and analyzed to inform a descriptive review addressing our main objectives. Evaluation methods and approaches, both within learning cycles and at the system level, as well as frameworks, indicators, and target outcomes, will be identified and summarized descriptively. Across evaluations, common challenges, assumptions, contextual factors, and mechanisms will be described.

Results: As of October 2024, the database searches described above yielded 3503 citations after duplicate removal. Full-text screening of 117 articles is complete, and 49 articles are under analysis. Results are expected in early 2025.

Conclusions: This research will characterize the current landscape of LHS evaluation approaches and provide a foundation for developing consistent and scalable metrics of LHS growth, maturity, and success. It will also identify opportunities to better align current evaluation approaches and metrics with population health needs, community priorities, equity, and health system strategic aims.

Trial Registration: Open Science Framework b5u7e; https://osf.io/b5u7e

International Registered Report Identifier (IRRID): DERR1-10.2196/57929