IEEE Access (Jan 2023)

Ensembling to Leverage the Interpretability of Medical Image Analysis Systems

  • Argyrios Zafeiriou,
  • Athanasios Kallipolitis,
  • Ilias Maglogiannis

DOI
https://doi.org/10.1109/ACCESS.2023.3291610
Journal volume & issue
Vol. 11
pp. 76437–76447

Abstract

Along with the increase in the accuracy of artificial intelligence systems, their complexity has also risen. Despite high accuracy, high-risk decision-making requires explanations of a model’s decisions, which often take the form of saliency maps. This work examines the efficacy of ensembling deep convolutional neural networks to improve explanations, under the premise that ensemble models draw on combined information. A novel approach is presented for aggregating saliency maps derived from multiple base models, as an alternative way of combining the different perspectives that several competent models offer. The proposed methodology lowers computational cost while allowing maps of various origins to be combined. Following a saliency map evaluation scheme, four tests are performed over three image datasets: two medical image datasets and one generic. The results suggest that interpretability is improved by combining information through the aggregation scheme. The discussion that follows provides insights into the workings behind the results, such as the specific combination of interpretability and ensemble methods, and offers useful suggestions for future work.
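To make the aggregation idea concrete, the following is a minimal sketch of one plausible way to combine saliency maps from several base models: normalize each map to a common [0, 1] range so that maps produced by different interpretability methods are comparable, then take a weighted pixelwise average. The function name, the min-max normalization, and the uniform default weights are illustrative assumptions, not the paper's exact scheme.

```python
def _normalize(saliency_map):
    # Min-max normalize a 2D map (list of lists) to [0, 1] so that
    # maps from different methods share a common scale.
    flat = [v for row in saliency_map for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:  # constant map carries no ranking information
        return [[0.0 for _ in row] for row in saliency_map]
    return [[(v - lo) / (hi - lo) for v in row] for row in saliency_map]

def aggregate_saliency_maps(maps, weights=None):
    # Weighted pixelwise average of normalized saliency maps.
    # `weights` defaults to a uniform combination over base models.
    if weights is None:
        weights = [1.0] * len(maps)
    total = sum(weights)
    weights = [w / total for w in weights]
    normed = [_normalize(m) for m in maps]
    rows, cols = len(maps[0]), len(maps[0][0])
    return [
        [sum(w * nm[i][j] for w, nm in zip(weights, normed)) for j in range(cols)]
        for i in range(rows)
    ]

# Hypothetical 2x2 maps from two base models (e.g. two CNNs with Grad-CAM)
m1 = [[0.0, 1.0], [0.5, 0.25]]
m2 = [[2.0, 0.0], [1.0, 1.0]]
combined = aggregate_saliency_maps([m1, m2])
```

Because each input is normalized before averaging, a base model whose raw attribution magnitudes are larger (here `m2`) cannot dominate the combined map; only the relative ranking of pixels within each map matters.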

Keywords