IEEE Access (Jan 2021)

Privacy-Preserving Generative Adversarial Network for Case-Based Explainability in Medical Image Analysis

  • Helena Montenegro,
  • Wilson Silva,
  • Jaime S. Cardoso

DOI
https://doi.org/10.1109/ACCESS.2021.3124844
Journal volume & issue
Vol. 9
pp. 148037–148047

Abstract

Although Deep Learning models have achieved remarkable results in medical image classification tasks, their lack of interpretability hinders their deployment in the clinical context. Case-based interpretability provides intuitive explanations, as it is a much more human-like approach than saliency-map-based interpretability. Nonetheless, since one is dealing with sensitive visual data, there is a high risk of exposing personal identity, threatening the individuals’ privacy. In this work, we propose a privacy-preserving generative adversarial network for the privatization of case-based explanations. We address the weaknesses of current privacy-preserving methods for visual data from three perspectives: realism, privacy, and explanatory value. We also introduce a counterfactual module in our generative adversarial network that provides counterfactual case-based explanations in addition to standard factual explanations. Experiments were performed on a biometric and a medical dataset, demonstrating the network’s potential to preserve the privacy of all subjects and retain its explanatory evidence while also maintaining a decent level of intelligibility.

Keywords