IEEE Access (Jan 2022)

Robustness Analysis of Deep Learning-Based Lung Cancer Classification Using Explainable Methods

  • Mafalda Malafaia,
  • Francisco Silva,
  • Ines Neves,
  • Tania Pereira,
  • Helder P. Oliveira

DOI
https://doi.org/10.1109/ACCESS.2022.3214824
Journal volume & issue
Vol. 10
pp. 112731 – 112741

Abstract

Deep Learning (DL)-based classification algorithms have been shown to achieve top results in clinical diagnosis, namely on lung cancer datasets. However, the complexity and opaqueness of these models, together with the still scarce training datasets, call for the development of explainable modeling methods that enable interpretation of the results. To this end, in this paper we propose a novel interpretability approach and demonstrate how it can be used on a DL lung cancer malignancy classifier to assess its stability and congruence even when fed a small number of image samples. Additionally, by disclosing the regions of the medical images most relevant to the resulting classification, the approach provides important insights into the corresponding clinical meaning apprehended by the algorithm. Explanations produced by ten different models for the same test sample are compared; these attest to the stability of the approach and show that the models focus on the same image regions.
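The abstract does not detail the explanation technique itself; as a rough illustration of the stability comparison it describes, the sketch below uses input-gradient saliency maps in PyTorch (an assumed stand-in for the paper's method), computed from several hypothetical models on the same test image, and measures how much the highlighted regions agree. The model definitions, the saliency method, the placeholder CT slice, and the agreement metric are all illustrative assumptions, not the authors' implementation.

    # Minimal sketch, assuming PyTorch and a gradient-based saliency method.
    # Nothing here reproduces the paper's actual models or explanation approach.
    import torch
    import torch.nn as nn

    def saliency_map(model, image):
        """Input-gradient saliency: |d score / d pixel|, a generic stand-in
        for the explanation computed for each trained model."""
        model.eval()
        x = image.clone().requires_grad_(True)
        score = model(x.unsqueeze(0)).squeeze()   # scalar malignancy score
        score.backward()
        return x.grad.abs().sum(dim=0)            # collapse channels -> H x W map

    def pairwise_agreement(maps):
        """Mean Pearson correlation between flattened saliency maps over all
        model pairs -- a simple proxy for the stability analysis."""
        flat = [m.flatten() for m in maps]
        corrs = []
        for i in range(len(flat)):
            for j in range(i + 1, len(flat)):
                pair = torch.stack([flat[i], flat[j]])
                corrs.append(torch.corrcoef(pair)[0, 1])
        return torch.stack(corrs).mean()

    if __name__ == "__main__":
        # Ten independently initialized classifiers stand in for the ten models.
        models = [nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
                                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(4, 1)) for _ in range(10)]
        test_ct_slice = torch.rand(1, 64, 64)      # placeholder for a CT slice
        maps = [saliency_map(m, test_ct_slice) for m in models]
        print("mean pairwise agreement:", pairwise_agreement(maps).item())

In this toy setup, a high mean agreement would indicate that the different models highlight the same image regions for the same sample, which is the kind of cross-model stability the abstract reports.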

Keywords