Frontiers in Computer Science (May 2023)

Evaluating machine-generated explanations: a “Scorecard” method for XAI measurement science

  • Robert R. Hoffman
  • Mohammadreza Jalaeian
  • Connor Tate
  • Gary Klein
  • Shane T. Mueller

DOI
https://doi.org/10.3389/fcomp.2023.1114806
Journal volume & issue
Vol. 5

Abstract


Introduction
Many Explainable AI (XAI) systems provide explanations that are merely clues or hints about the underlying computational model, such as feature lists, decision trees, or saliency images. However, a user might want answers to deeper questions, such as: How does it work? Why did it do that instead of something else? What things can it get wrong? How might XAI system developers evaluate existing XAI systems with regard to the depth of support they provide for the user's sensemaking? How might they shape new XAI systems so as to support the user's sensemaking? What conceptual terminology might assist developers in approaching this challenge?

Method
Based on cognitive theory, a scale was developed reflecting depth of explanation, that is, the degree to which explanations support the user's sensemaking. The seven levels of this scale form the Explanation Scorecard.

Results and discussion
The Scorecard was applied in an analysis of the recent literature, showing that many systems still present only low-level explanations. Developers can use the Scorecard to conceptualize how they might extend their machine-generated explanations to support users in developing a mental model that instills appropriate trust and reliance. The article concludes with recommendations for improving XAI systems with regard to these cognitive considerations, and for how results of XAI system evaluations should be reported.
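As a rough illustration only (not part of the article), the sketch below shows one way a developer might record Scorecard ratings when surveying XAI systems, in the spirit of the literature analysis described above. It assumes nothing about the seven levels beyond the 1–7 depth ordering stated in the abstract; the class, field names, and example systems are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ScorecardRating:
    """One XAI system rated on the seven-level Explanation Scorecard.

    The integer encodes depth of explanation (degree of support for the
    user's sensemaking); the level definitions themselves are given in
    the article and are not reproduced here.
    """
    system_name: str
    level: int          # 1 = shallowest explanation, 7 = deepest
    rationale: str = ""

    def __post_init__(self) -> None:
        if not 1 <= self.level <= 7:
            raise ValueError("Scorecard level must be between 1 and 7")

def level_distribution(ratings):
    """Tally how many surveyed systems sit at each Scorecard level."""
    return Counter(r.level for r in ratings)

# Hypothetical usage with made-up systems and ratings:
ratings = [
    ScorecardRating("System A (saliency maps)", 2, "hints at features only"),
    ScorecardRating("System B (contrastive explainer)", 5,
                    "addresses 'why not?' questions"),
]
print(level_distribution(ratings))  # Counter({2: 1, 5: 1})
```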
