IEEE Access (Jan 2025)

HuberAIME: A Robust Approach to Explainable AI in the Presence of Outliers

  • Takafumi Nakanishi

DOI
https://doi.org/10.1109/ACCESS.2025.3565279
Journal volume & issue
Vol. 13
pp. 76796 – 76810

Abstract

With the increasing accuracy of machine-learning models in recent years, explainable artificial intelligence (XAI), which allows the internal decisions of these models to be understood, has become essential. However, many explanation methods are vulnerable to outliers and noise, and their results may be distorted by extreme values. This study proposes a new method, HuberAIME, a variant of approximate inverse model explanations (AIME) that is made robust to outliers through the Huber loss. HuberAIME limits the impact of outliers by weighting observations with iteratively reweighted least squares (IRLS), preventing AIME's feature-importance estimation from being degraded by extreme data points. Comparative experiments were conducted on the Wine dataset, which has almost no outliers; the Adult dataset, which contains extreme values; and the Statlog (German Credit) dataset, which has moderate outliers. SHapley Additive exPlanations (SHAP), AIME, and HuberAIME were evaluated on six metrics: explanatory accuracy, sparsity, stability, computational efficiency, robustness, and completeness. HuberAIME was equivalent to AIME on the Wine dataset but outperformed it on the Adult dataset, exhibiting high fidelity and stability. On the German Credit dataset, AIME itself showed a certain degree of robustness, and there was no significant difference between the two methods. Overall, HuberAIME is useful for data containing serious outliers while matching AIME's explanatory performance when outliers are few. HuberAIME is therefore expected to improve reliability in real-world deployments as a robust XAI method.
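The core mechanism the abstract describes, downweighting extreme residuals via the Huber loss inside an IRLS loop, can be sketched as follows. This is a minimal illustration of Huber-weighted robust linear fitting, not the authors' HuberAIME implementation; the function names, the MAD-based scale estimate, and the threshold `delta = 1.345` are illustrative assumptions.

```python
import numpy as np

def huber_weights(r, delta=1.345):
    """Huber IRLS weights: 1 for small standardized residuals, delta/|r| beyond the threshold."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def irls_huber(X, y, delta=1.345, n_iter=50, tol=1e-8):
    """Fit a linear model robustly by iteratively reweighted least squares with Huber weights."""
    # Ordinary least squares as the starting point
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        # Robust residual scale via the median absolute deviation (MAD)
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        w = huber_weights(r / scale, delta)
        # Weighted least squares step: multiply rows by sqrt(w)
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.linalg.norm(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta
```

Because each outlier's weight shrinks like `delta / |r|`, a single extreme point contributes only a bounded amount to the fit, which is the property that keeps the surrogate's feature-importance estimates stable on data like the Adult dataset.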

Keywords