IEEE Access (Jan 2024)

Toward Building Trust in Machine Learning Models: Quantifying the Explainability by SHAP and References to Human Strategy

  • Zhaopeng Li,
  • Mondher Bouazizi,
  • Tomoaki Ohtsuki,
  • Masakuni Ishii,
  • Eri Nakahara

DOI
https://doi.org/10.1109/ACCESS.2023.3347796
Journal volume & issue
Vol. 12
pp. 11010–11023

Abstract

Local model-agnostic Explainable Artificial Intelligence (XAI) methods, such as LIME and SHAP, have recently gained popularity among researchers and data scientists for explaining black-box Machine Learning (ML) models. In industry, practitioners focus not only on how these explanations can validate their models but also on how they can help maintain end-users' trust. Some studies attempted to measure this ability by quantifying what they refer to as the explainability or interpretability of ML models. In this paper, we introduce a new method for measuring explainability with reference to an approximated human model. We develop a human-friendly interface to strategically collect human decision-making and translate it into a set of logical rules and intuitions, or simply annotations. These annotations are then compared with the local explanations derived from common XAI tools. Through a human survey, we demonstrate that it is possible to quantify human intuition and empirically compare it to a given explanation, enabling a practical quantification of explainability. By relying on this new method, we identified several potential flaws in today's ML model selection process. Furthermore, we demonstrate how our method can help to better evaluate ML models.
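The comparison step described above can be illustrated with a minimal sketch: given a SHAP-style local attribution vector for one prediction and a human-annotated importance vector for the same instance, one can score their agreement. The feature names, the hard-coded vectors, and the sign/rank-based score below are illustrative assumptions, not the paper's actual metric.

    # Minimal sketch (assumed scoring, not the authors' method): compare a
    # SHAP-style local attribution vector with a human annotation vector.
    import numpy as np
    from scipy.stats import spearmanr

    features = ["age", "income", "tenure", "num_purchases"]  # hypothetical features

    # Local attribution for one prediction (e.g., SHAP values), hard-coded here.
    shap_values = np.array([0.42, -0.10, 0.05, 0.30])

    # Human annotation: signed importance elicited from an annotator for the same instance.
    human_annotation = np.array([0.50, 0.00, 0.10, 0.40])

    # Sign agreement: do explanation and human agree on each feature's direction of effect?
    sign_agreement = np.mean(np.sign(shap_values) == np.sign(human_annotation))

    # Rank agreement: do they order the features by absolute importance similarly?
    rank_corr, _ = spearmanr(np.abs(shap_values), np.abs(human_annotation))

    # A simple combined agreement proxy in [0, 1] (illustrative only).
    explainability_score = 0.5 * sign_agreement + 0.5 * (rank_corr + 1) / 2
    print(f"sign agreement = {sign_agreement:.2f}, rank corr = {rank_corr:.2f}, "
          f"score = {explainability_score:.2f}")

In this sketch, a higher score indicates that the XAI explanation is closer to the human's stated reasoning; the paper builds its explainability measure on a comparison of this kind between local explanations and collected annotations.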

Keywords