Scientific Reports (Jun 2022)

Towards interpretable, medically grounded, EMR-based risk prediction models

  • Isabell Twick,
  • Guy Zahavi,
  • Haggai Benvenisti,
  • Ronya Rubinstein,
  • Michael S. Woods,
  • Haim Berkenstadt,
  • Aviram Nissan,
  • Enes Hosgor,
  • Dan Assaf

DOI
https://doi.org/10.1038/s41598-022-13504-7
Journal volume & issue
Vol. 12, no. 1
pp. 1–12

Abstract

Machine learning-based risk prediction models have the potential to improve patient outcomes by assessing risk more accurately than clinicians. Significant additional value lies in these models providing feedback about the factors that amplify an individual patient’s risk. Identifying risk factors enables more informed decisions about interventions to mitigate or ameliorate modifiable factors. For these reasons, risk prediction models must be explainable and grounded in medical knowledge. Current machine learning-based risk prediction models are frequently ‘black-box’ models whose inner workings cannot be understood easily, making it difficult to identify risk drivers. Because machine learning models follow patterns in the data rather than looking for medically relevant relationships, possible risk factors identified by these models do not necessarily translate into actionable insights for clinicians. Here, we use the example of risk assessment for postoperative complications to demonstrate how explainable and medically grounded risk prediction models can be developed. Pre- and postoperative risk prediction models are trained on clinically relevant inputs extracted from electronic medical record data. We show that these models achieve predictive performance similar to that of models incorporating a wider range of inputs, and we explain the models’ decision-making process by visualizing how different model inputs and their values affect the models’ predictions.