BMC Medical Informatics and Decision Making (Jul 2020)

Interpretable clinical prediction via attention-based neural network

  • Peipei Chen,
  • Wei Dong,
  • Jinliang Wang,
  • Xudong Lu,
  • Uzay Kaymak,
  • Zhengxing Huang

DOI
https://doi.org/10.1186/s12911-020-1110-7
Journal volume & issue
Vol. 20, no. S3
pp. 1–9

Abstract

Background: The interpretability of predictions made by machine learning models is vital, especially in critical fields such as healthcare. With the increasing adoption of electronic healthcare records (EHR) by medical organizations over the last decade, abundant electronic patient data have accumulated, and neural networks and other deep learning techniques are gradually being applied to clinical tasks to exploit the potential of EHR data. However, typical deep learning models are black boxes: they are not transparent, and their prediction outcomes are difficult to interpret.

Methods: To remedy this limitation, we propose an attention-based neural network model for interpretable clinical prediction. Specifically, the proposed model employs an attention mechanism to capture critical features, along with their attention signals on the prediction results, so that the predictions generated by the neural network can be interpreted.

Results: We evaluated the proposed model on a real-world clinical dataset of 736 samples, predicting readmission for heart failure patients. The model achieved an accuracy of 66.7% and an AUC of 69.1%, outperforming the baseline models. In addition, we display patient-specific attention weights, which can not only help clinicians understand the prediction outcomes but also assist them in selecting individualized treatment strategies or intervention plans.

Conclusions: The experimental results demonstrate that equipping the model with an attention mechanism improves both prediction performance and interpretability.
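The article itself includes no source code; the sketch below illustrates only the general idea described in the Methods section, namely feature-level attention feeding an interpretable binary classifier, written here in PyTorch. All names and design choices (the AttentionPredictor class, embed_dim, the tanh activation, layer sizes) are illustrative assumptions, not the authors' actual architecture.

import torch
import torch.nn as nn

class AttentionPredictor(nn.Module):
    # A hypothetical sketch: each scalar input feature is embedded, scored
    # by a small attention network, and the attention-weighted sum feeds a
    # binary classifier. The per-feature attention weights double as a
    # patient-specific explanation of the prediction.
    def __init__(self, n_features, embed_dim=16):
        super().__init__()
        self.embed = nn.Linear(1, embed_dim)   # shared embedding of each feature value
        self.attn = nn.Linear(embed_dim, 1)    # scores each embedded feature
        self.classifier = nn.Linear(embed_dim, 1)

    def forward(self, x):                            # x: (batch, n_features)
        e = torch.tanh(self.embed(x.unsqueeze(-1)))  # (batch, n_features, embed_dim)
        scores = self.attn(e).squeeze(-1)            # (batch, n_features)
        weights = torch.softmax(scores, dim=1)       # attention weights, sum to 1
        context = (weights.unsqueeze(-1) * e).sum(dim=1)  # (batch, embed_dim)
        prob = torch.sigmoid(self.classifier(context)).squeeze(-1)
        return prob, weights                         # prediction plus explanation

A usage example under the same assumptions: for a batch of patients, the returned weights indicate how much each clinical feature contributed to that patient's readmission risk score.

model = AttentionPredictor(n_features=30)
x = torch.randn(4, 30)           # e.g. 4 patients, 30 clinical features
prob, weights = model(x)         # weights[i] explains patient i's prediction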

Keywords