JMIR Formative Research (Aug 2022)

A Comparison Between Clinical Guidelines and Real-World Treatment Data in Examining the Use of Session Summaries: Retrospective Study

  • Shiri Sadeh-Sharvit,
  • Simon A Rego,
  • Samuel Jefroykin,
  • Gal Peretz,
  • Tomer Kupershmidt

DOI
https://doi.org/10.2196/39846
Journal volume & issue
Vol. 6, no. 8
p. e39846

Abstract

Background: Although behavioral interventions have been found to be efficacious and effective in randomized clinical trials for most mental illnesses, the quality and efficacy of mental health care delivery remains inadequate in real-world settings, partly owing to suboptimal treatment fidelity. This "therapist drift" is an ongoing issue that ultimately reduces the effectiveness of treatments; however, until recently, there have been limited opportunities to assess adherence beyond large randomized controlled trials.

Objective: This study explored therapists' use of a standard component that is pertinent across most behavioral treatments—prompting clients to summarize their treatment session as a means of consolidating and augmenting their understanding of the session and the treatment plan.

Methods: The data set for this study comprised 17,607 behavioral treatment sessions administered by 322 therapists to 3519 patients in 37 behavioral health care programs across the United States. Sessions were captured by a therapy-specific artificial intelligence (AI) platform, and an automatic speech recognition system transcribed the treatment meeting and separated the transcript into therapist and client utterances. A search for possible session summary prompts was then conducted, with 2 psychologists validating the text that emerged.

Results: We found that, despite clinical recommendations, only 54 (0.30%) sessions included a summary. Exploratory analyses indicated that session summaries mostly addressed relationships (n=27), work (n=20), change (n=6), and alcohol (n=5). Sessions with meeting summaries were also characterized by more therapist interventions and greater use of validation, complex reflections, and proactive problem-solving techniques.

Conclusions: To the best of our knowledge, this is the first study to assess a large, diverse data set of real-world treatment practices.
Our findings provide evidence that fidelity to the core components of empirically designed psychological interventions is a challenge in real-world settings. The results of this study can inform the development of machine learning and AI algorithms that offer nuanced, timely feedback to providers, thereby improving the delivery of evidence-based practices and the quality of mental health care services and facilitating better clinical outcomes in real-world settings.
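The search step described in the Methods — scanning therapist utterances in a diarized transcript for summary-prompt phrasing — could be sketched as follows. The phrase patterns and data layout here are illustrative assumptions for exposition only; the study's actual search terms and pipeline are not published in this abstract.

```python
import re

# Hypothetical phrasings a therapist might use to prompt a session summary.
# These patterns are assumptions, not the study's validated search terms.
SUMMARY_PATTERNS = [
    r"\bsum(?:marize|ming) up\b",
    r"\bsummary of (?:today|our session)\b",
    r"\bwhat (?:are you|will you) tak(?:e|ing) away\b",
    r"\brecap\b",
]

def find_summary_prompts(utterances):
    """Return therapist utterances that look like summary prompts.

    `utterances` is a list of (speaker, text) pairs from a diarized
    transcript, e.g. [("therapist", "..."), ("client", "...")].
    """
    hits = []
    for speaker, text in utterances:
        if speaker != "therapist":
            continue  # only therapist turns can contain a summary prompt
        lowered = text.lower()
        if any(re.search(pattern, lowered) for pattern in SUMMARY_PATTERNS):
            hits.append(text)
    return hits

# Example diarized session fragment (invented for illustration):
session = [
    ("therapist", "Before we finish, can you give me a summary of today?"),
    ("client", "Sure, we mostly talked about work stress."),
]
print(find_summary_prompts(session))
```

In the study, candidate matches surfaced by a search like this were then validated by two psychologists, so a keyword pass of this kind would only be a first filter, not the final label.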