IEEE Access (Jan 2022)

Privacy-Preserving Case-Based Explanations: Enabling Visual Interpretability by Protecting Privacy

  • Helena Montenegro
  • Wilson Silva
  • Alex Gaudio
  • Matt Fredrikson
  • Asim Smailagic
  • Jaime S. Cardoso

DOI
https://doi.org/10.1109/ACCESS.2022.3157589
Journal volume & issue
Vol. 10
pp. 28333–28347

Abstract

Deep Learning achieves state-of-the-art results in many domains, yet its black-box nature limits its application in real-world contexts. An intuitive way to improve the interpretability of Deep Learning models is to explain their decisions with similar cases. However, case-based explanations cannot be used in contexts where the data exposes personal identity, as they may compromise the privacy of individuals. In this work, we identify the main limitations and challenges in the anonymization of case-based explanations of image data through a survey of case-based interpretability and image anonymization methods. We empirically analyze these anonymization methods with regard to their capacity to remove personally identifiable information while preserving the relevant semantic properties of the data. Through this analysis, we conclude that most privacy-preserving methods are not yet adequate for case-based explanations. To promote research on this topic, we formalize the privacy protection of visual case-based explanations as a multi-objective problem that must preserve privacy, intelligibility, and relevant explanatory evidence regarding a predictive task. We empirically verify the potential of interpretability saliency maps as qualitative evaluation tools for anonymization. Finally, we identify and propose new lines of research to guide future work in the generation of privacy-preserving case-based explanations.
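
As a rough illustration only (the notation below is not taken from the paper), the multi-objective formulation mentioned in the abstract can be sketched as a weighted trade-off among three losses over an anonymization function A, where the individual loss terms and weights are assumed for the sake of the example:

```latex
% Hypothetical sketch of the multi-objective anonymization problem.
% A maps an explanation image x to its anonymized version A(x);
% f is the predictive (task) model whose explanatory evidence must
% be preserved. The losses and weights \lambda_i are illustrative,
% not the paper's actual formulation.
\min_{A} \;
  \lambda_1 \, \mathcal{L}_{\mathrm{priv}}\bigl(A(x), x\bigr)   % identity leakage
+ \lambda_2 \, \mathcal{L}_{\mathrm{int}}\bigl(A(x)\bigr)       % intelligibility / realism
+ \lambda_3 \, \mathcal{L}_{\mathrm{exp}}\bigl(A(x), x; f\bigr) % explanatory evidence w.r.t. f
```

Under this reading, driving any one term to zero in isolation is easy (e.g., heavy blurring minimizes identity leakage but destroys intelligibility and explanatory evidence), which is why the abstract frames anonymization of case-based explanations as an inherently multi-objective problem.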

Keywords