IEEE Access (Jan 2021)

Interpretable Pneumonia Detection by Combining Deep Learning and Explainable Models With Multisource Data

  • Hao Ren,
  • Aslan B. Wong,
  • Wanmin Lian,
  • Weibin Cheng,
  • Ying Zhang,
  • Jianwei He,
  • Qingfeng Liu,
  • Jiasheng Yang,
  • Chen Jason Zhang,
  • Kaishun Wu,
  • Haodi Zhang

DOI
https://doi.org/10.1109/ACCESS.2021.3090215
Journal volume & issue
Vol. 9
pp. 95872 – 95883

Abstract


With the rapid development of AI techniques, computer-aided diagnosis has attracted much attention and has been successfully deployed in many health care and medical diagnosis applications. On some specific tasks, learning-based systems can match or even outperform human experts. This impressive performance owes to the excellent expressiveness and scalability of neural networks, although the intuition behind the models usually cannot be represented explicitly. For computer-aided diagnosis, however, interpretability is as important as diagnostic precision. To fill this gap, we propose an intuitive approach to detecting pneumonia interpretably. First, we build a large dataset of community-acquired pneumonia (as distinguished from nosocomial pneumonia) consisting of 35389 cases based on actual medical records. Second, we train a prediction model on the chest X-ray images in our dataset that precisely detects pneumonia. Third, we propose an intuitive approach to combining neural networks with an explainable model such as a Bayesian network. The experimental results show that our proposal further improves performance by using multi-source data and provides intuitive explanations for the diagnosis results.
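The abstract only sketches the third step, so the snippet below is a minimal illustrative sketch, not the authors' implementation: it fuses an image-based CNN score with additional clinical evidence through a tiny discrete Bayesian network so the posterior can be explained factor by factor. The network structure, variable names (cnn_positive, fever, high_wbc), the prior, and all CPT values are hypothetical, as is the 0.5 threshold on the CNN score.

```python
import numpy as np

# Hypothetical naive structure: Pneumonia -> CNN_Positive, Fever, High_WBC.
# Prior P(Pneumonia): [P(no), P(yes)] -- assumed prevalence, not from the paper.
p_pneumonia = np.array([0.7, 0.3])

# Conditional probability tables P(evidence | Pneumonia); row 0 = no, row 1 = yes.
cpt = {
    "cnn_positive": np.array([[0.90, 0.10],
                              [0.15, 0.85]]),
    "fever":        np.array([[0.80, 0.20],
                              [0.30, 0.70]]),
    "high_wbc":     np.array([[0.85, 0.15],
                              [0.40, 0.60]]),
}

def posterior_pneumonia(evidence):
    """Posterior P(Pneumonia | evidence) by direct enumeration.

    evidence: dict mapping variable name -> observed state (0 or 1).
    """
    joint = p_pneumonia.copy()
    for var, state in evidence.items():
        joint = joint * cpt[var][:, state]   # multiply in each likelihood term
    return joint / joint.sum()               # normalize over Pneumonia states

if __name__ == "__main__":
    # Suppose the image model flags the X-ray as positive (score above the
    # assumed threshold), the patient has fever, but WBC count is normal.
    cnn_score = 0.91                          # hypothetical CNN output
    obs = {
        "cnn_positive": int(cnn_score > 0.5),
        "fever": 1,
        "high_wbc": 0,
    }
    post = posterior_pneumonia(obs)
    print(f"P(pneumonia | evidence) = {post[1]:.3f}")
    # Each factor cpt[var][:, state] can be reported separately to show how
    # much every evidence source pushed the diagnosis, which is where the
    # interpretability gain over a pure neural prediction comes from.
```

In this kind of setup, the multi-source data mentioned in the abstract would enter as additional evidence nodes alongside the image-based score, while the diagnosis remains a single posterior that can be decomposed into per-source contributions.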

Keywords