PLoS ONE (Jan 2020)

Clinical Analytics Prediction Engine (CAPE): Development, electronic health record integration and prospective validation of hospital mortality, 180-day mortality and 30-day readmission risk prediction models.

  • Nirav Shah,
  • Chad Konchak,
  • Daniel Chertok,
  • Loretta Au,
  • Alex Kozlov,
  • Urmila Ravichandran,
  • Patrick McNulty,
  • Linning Liao,
  • Kate Steele,
  • Maureen Kharasch,
  • Chris Boyle,
  • Tom Hensing,
  • David Lovinger,
  • Jonathan Birnberg,
  • Anthony Solomonides,
  • Lakshmi Halasyamani

DOI
https://doi.org/10.1371/journal.pone.0238065
Journal volume & issue
Vol. 15, no. 8
p. e0238065

Abstract


Background

Numerous predictive models in the literature stratify patients by risk of mortality and readmission. Few prediction models have been developed to optimize impact while sustaining sufficient performance.

Objective

We aimed to derive models for hospital mortality, 180-day mortality and 30-day readmission, implement these models within our electronic health record, and prospectively validate them for use across an entire health system.

Materials & methods

We developed, integrated into our electronic health record, and prospectively validated three predictive models using logistic regression on data collected from patients 18 to 99 years old who had an inpatient or observation admission at NorthShore University HealthSystem, a four-hospital integrated system in the United States, from January 2012 to September 2018. We assessed model performance using the area under the receiver operating characteristic curve (AUC).

Results

Models were derived and validated at three time points: retrospective, prospective at discharge, and prospective at 4 hours after presentation. AUCs for hospital mortality were 0.91, 0.89 and 0.77, respectively. AUCs for 30-day readmission were 0.71, 0.71 and 0.69, respectively. The 180-day mortality models were only retrospectively validated, with an AUC of 0.85.

Discussion

We retained good model performance while optimizing potential model impact by also valuing model derivation efficiency, usability, sensitivity, generalizability, and the ability to prescribe timely interventions that reduce underlying risk. Measuring model impact by tying prediction models to interventions that are then rapidly tested will establish a path for meaningful clinical improvement and implementation.
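The AUC metric the study reports can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it computes AUC via the pairwise (Mann-Whitney) interpretation — the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one — on hypothetical labels and model scores:

```python
def auc(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly,
    counting ties between scores as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical outcomes (1 = event, e.g. mortality) and predicted risks
labels = [0, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80]
print(auc(labels, scores))  # 0.75
```

An AUC of 0.5 corresponds to a model no better than chance, and 1.0 to perfect discrimination, which is why the hospital-mortality models (AUC 0.77-0.91) discriminate better than the 30-day readmission models (AUC 0.69-0.71).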