IEEE Access (Jan 2025)
Prototype-Based Explanation for Semantic Gap Reduction With Distributional Embedding
Abstract
The demand for interpretable models has driven the exploration of explainable approaches grounded in human-friendly case-based reasoning. Among these approaches, prototype-based methods have proven effective at case-based reasoning by relying on prototypes and similarity scores. However, their interpretability suffers when similarity in the input space is not preserved in the latent space. This semantic gap leads to inconsistent explanations for images that humans perceive as similar, which undermines the reliability of the explanations. In this paper, we propose a distributional embedding framework in which the embedding is randomly sampled from a parameterized distribution in a regularized latent space. With only a simple modification, our method significantly improves the reliability of the model's explanations by bridging the gap between similarity as perceived by humans and similarity as used in explanations. To demonstrate this, we conduct experiments ranging from small-scale scenarios to direct assessments of similarity-based explanations. Extensive comparisons on a real-world dataset with multiple backbone networks demonstrate the usability and efficacy of the proposed framework.
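To make the idea of a distributional embedding concrete, the following is a minimal, hypothetical sketch of a prototype head in which the embedding is sampled from a parameterized Gaussian and class scores are derived from prototype similarities. The choice of a Gaussian, the KL regularizer, and the negative-squared-distance similarity are assumptions for illustration; the paper's actual distribution, regularizer, and similarity function may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistributionalPrototypeHead(nn.Module):
    """Illustrative prototype head with a sampled (distributional) embedding.

    Hypothetical sketch: a backbone feature is mapped to the mean and
    log-variance of a Gaussian, an embedding is sampled via the
    reparameterization trick, and class scores come from similarity to
    learned prototypes. This is not necessarily the paper's exact design.
    """

    def __init__(self, feat_dim: int, embed_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        self.to_mu = nn.Linear(feat_dim, embed_dim)
        self.to_logvar = nn.Linear(feat_dim, embed_dim)
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, embed_dim))
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, features: torch.Tensor):
        mu = self.to_mu(features)
        logvar = self.to_logvar(features)
        # Reparameterization: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Similarity score: negative squared Euclidean distance to each prototype.
        sims = -torch.cdist(z, self.prototypes).pow(2)
        logits = self.classifier(sims)
        # KL term toward a standard normal: one plausible latent-space regularizer.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return logits, sims, kl


# Toy usage: 512-d backbone features, 64-d embedding, 10 prototypes, 5 classes.
head = DistributionalPrototypeHead(feat_dim=512, embed_dim=64, num_prototypes=10, num_classes=5)
feats = torch.randn(8, 512)                          # batch of backbone features
logits, sims, kl = head(feats)
labels = torch.randint(0, 5, (8,))
loss = F.cross_entropy(logits, labels) + 0.1 * kl    # 0.1 is an arbitrary KL weight
print(logits.shape, sims.shape, loss.item())
```

In this sketch, the similarity scores `sims` would serve as the case-based explanation, while the sampled embedding and the KL regularizer encourage the latent space to respect input-space similarity.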
Keywords