BMC Health Services Research (Nov 2024)

Implementation outcomes from a randomized, controlled trial of a strategy to improve integration of behavioral health and primary care services

  • Constance van Eeghen,
  • Jeni Soucie,
  • Jessica Clifton,
  • Juvena Hitt,
  • Brenda Mollis,
  • Gail L. Rose,
  • Sarah Hudson Scholle,
  • Kari A. Stephens,
  • Xiaofei Zhou,
  • Laura-Mae Baldwin

DOI
https://doi.org/10.1186/s12913-024-11801-7
Journal volume & issue
Vol. 24, no. 1
pp. 1–13

Abstract

Background: Integrating behavioral health services in primary care is challenging; a toolkit approach to practice implementation can help. A recent comparative effectiveness randomized clinical trial examined the impact of a toolkit for improving integration on outcomes for patients with multiple chronic conditions. Some aspects of behavioral health integration improved; patient-reported outcomes did not. This report evaluates the implementation strategy (Toolkit) using Proctor’s (2011) implementation outcomes model.

Methods: Using data from the 20 practices randomized to the active (Toolkit strategy) arm (education, redesign workbooks, online learning community, remote coaching), we identified 23 measures from practice member surveys, coach interviews, reports, and field logs to assess Toolkit acceptability, appropriateness, feasibility, and fidelity. A practice survey score was high (met expectations) if its average was ≥ 4 on a 1–5 scale; all other data were coded dichotomously, with high = 1.

Results: For acceptability, 74% (14) of practices had high scores for willingness of providers and staff to use the Toolkit, and 68% (13) for quality improvement teams liking the Toolkit. For appropriateness, 95% (19) of practices had high scores for the structured process being a good match and 63% (12) for the Toolkit being a good match. Feasibility, measured by Toolkit prerequisites, was scored lower by site members at project end (e.g., provider leader available as champion: 53% of practices) than by remote coaches observing practice teams (74%). For “do-ability,” coaches rated feasibility lower (e.g., completion of workbook activities: 32% of practices) than the practice teams did (68%). Fidelity was low as assessed by seven measures, with 50% to 78% of practices having high scores.

Conclusions: Existing data from large trials can be used to describe implementation outcomes. The Toolkit was not implemented with fidelity in at least one quarter of the sites, despite being acceptable and appropriate, possibly due to low feasibility in the form of unmet prerequisites and Toolkit complexity. Variability in fidelity underscores the importance of implementation strategies that fit each organization; further study is needed on contextual factors, use of the Toolkit, and the relationship between Toolkit use and study outcomes.

Trial registration: ClinicalTrials.gov NCT02868983; date of registration: 08/15/2016.

Keywords