Digital Health (Feb 2019)

Adaptive design of a clinical decision support tool: What the impact on utilization rates means for future CDS research

  • Devin Mann,
  • Rachel Hess,
  • Thomas McGinn,
  • Rebecca Mishuris,
  • Sara Chokshi,
  • Lauren McCullagh,
  • Paul D. Smith,
  • Joseph Palmisano,
  • Safiya Richardson,
  • David A. Feldstein

DOI
https://doi.org/10.1177/2055207619827716
Journal volume & issue
Vol. 5

Abstract

OBJECTIVE: In our prior integrated clinical prediction rule study, an agile, user-centered approach to the design of a clinical decision support tool achieved high adoption rates. To determine whether this user-centered process is also effective when adapting clinical decision support tools for new settings, we examined utilization rates of a tool adapted from the original integrated clinical prediction rule study tool and compared them with the rates achieved in the original study.

MATERIALS AND METHODS: We conducted pre-deployment usability testing and, at 6 months post-deployment, semi-structured group interviews with 75 providers at 14 intervention clinics across the two sites to collect user feedback. Qualitative data analysis was divided into immediate and delayed stages; here we report immediate-stage findings from real-time field notes, which were used to generate rapid, pragmatic recommendations for iterative tool refinement. Monthly utilization rates were calculated and examined over 12 months.

RESULTS: We hypothesized that a well-validated, user-centered clinical decision support tool would achieve relatively high adoption rates. At 6 months post-deployment, however, utilization rates of the adapted tool were substantially lower than the 68% observed in the original integrated clinical prediction rule study trial, at 17% (Health System A) and 5% (Health System B). User feedback at 6 months yielded recommendations for tool refinement, which were incorporated into the tool design where possible; nevertheless, utilization rates at 12 months post-deployment remained low, at 14% and 4%, respectively.

DISCUSSION: Although user-centered design remains valuable, these findings demonstrate its limitations given the complexity of clinical decision support.

CONCLUSION: Strategies for addressing persistent external factors that affect clinical decision support adoption should be considered in addition to user-centered design and implementation of clinical decision support.