IEEE Access (Jan 2024)

Counterfactual and Prototypical Explanations for Tabular Data via Interpretable Latent Space

  • Simone Piaggesi,
  • Francesco Bodria,
  • Riccardo Guidotti,
  • Fosca Giannotti,
  • Dino Pedreschi

DOI: https://doi.org/10.1109/ACCESS.2024.3496114
Journal volume & issue: Vol. 12, pp. 168983–169000

Abstract

Artificial Intelligence decision-making systems have dramatically increased their predictive power in recent years, surpassing human performance on many specific tasks. However, this gain in performance has come with an increase in the complexity of the black-box models adopted by AI systems, rendering their decision processes entirely opaque. Explainable AI is a field that seeks to make AI decisions more transparent by producing explanations. In this paper, we propose CP-ILS, a comprehensive interpretable feature reduction method for tabular data capable of generating Counterfactual and Prototypical post-hoc explanations using an Interpretable Latent Space. CP-ILS optimizes a transparent feature space whose similarity and linearity properties enable the easy extraction of local and global explanations for any pre-trained black-box model, in the form of counterfactual/prototype pairs. We evaluated the effectiveness of the created latent space by showing its capability to preserve pairwise similarities comparably to well-known dimensionality reduction techniques. Moreover, we assessed the quality of counterfactuals and prototypes generated with CP-ILS against state-of-the-art explainers, demonstrating that our approach obtains more robust, plausible, and accurate explanations than its competitors under most experimental conditions.
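The general idea the abstract describes, exploiting the linearity of an interpretable latent space to find counterfactual/prototype pairs for a black-box model, can be illustrated with a rough sketch. Everything below is an illustrative assumption, not the paper's actual CP-ILS algorithm: the function names, the choice of nearest-prototype anchoring, and the linear-interpolation search are placeholders for the method's real optimization.

```python
import numpy as np

def nearest_prototype(z, prototypes, labels, target):
    # Among the prototypes carrying the desired (target) label,
    # return the one closest to the query point z in latent space.
    candidates = prototypes[labels == target]
    return candidates[np.argmin(np.linalg.norm(candidates - z, axis=1))]

def counterfactual(z, predict, prototypes, labels, target, steps=100):
    # Hypothetical counterfactual search: walk linearly from z toward
    # the nearest target-class prototype (linearity of the latent space
    # makes this path meaningful) and return the first point that the
    # black-box predictor assigns to the target class.
    proto = nearest_prototype(z, prototypes, labels, target)
    for t in np.linspace(0.0, 1.0, steps):
        cand = (1.0 - t) * z + t * proto
        if predict(cand) == target:
            return cand
    return proto  # the prototype itself is a valid fallback

# Toy usage with a 1-D latent space and a threshold black box:
predict = lambda z: int(z[0] > 0.5)
prototypes = np.array([[0.0], [1.0]])
labels = np.array([0, 1])
cf = counterfactual(np.array([0.1]), predict, prototypes, labels, target=1)
# cf is the first point along the path that the model labels as class 1
```

The returned counterfactual lies just past the model's decision boundary, while the prototype supplies a global, class-level reference point; the pairing of the two is what the abstract refers to as a counterfactual/prototype pair.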

Keywords