Human-Centric Intelligent Systems (Apr 2024)

Evaluating the Usefulness of Counterfactual Explanations from Bayesian Networks

  • Raphaela Butz,
  • Arjen Hommersom,
  • Renée Schulz,
  • Hans van Ditmarsch

DOI
https://doi.org/10.1007/s44230-024-00066-2
Journal volume & issue
Vol. 4, no. 2
pp. 286–298

Abstract

Bayesian networks are commonly used for learning under uncertainty and for incorporating expert knowledge. However, they are hard to interpret, especially when the network structure is complex. Methods for explaining Bayesian networks operate under certain assumptions about what constitutes the best explanation, without actually verifying those assumptions. One common assumption is that a shorter causal chain from one variable to another enhances its explanatory strength. Counterfactual explanations have gained popularity in artificial intelligence in recent years. It is well known that counterfactuals can be generated from causal Bayesian networks, but there is no indication of which of them are useful for explanatory purposes. In this paper, we examine how findings from psychology can be applied to search for counterfactuals that end users perceive as more useful explanations. For this purpose, we conducted a questionnaire to test whether counterfactuals that change an actionable cause are considered more useful than counterfactuals that change a direct cause. The results of the questionnaire indicate that actionable counterfactuals are preferred regardless of whether they change a direct cause or a cause with a longer causal chain.
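To make the contrast in the abstract concrete, the following is a minimal sketch of the two kinds of counterfactual query the paper compares, written with the pgmpy library. The three-node chain (Exercise → BloodPressure → HeartRisk), all CPD values, and the variable names are illustrative assumptions, not taken from the paper, and the interventional do-queries shown here are a simplified stand-in for full abduction-based counterfactual reasoning.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import CausalInference

# Hypothetical causal chain: Exercise -> BloodPressure -> HeartRisk
model = BayesianNetwork([("Exercise", "BloodPressure"),
                         ("BloodPressure", "HeartRisk")])
model.add_cpds(
    TabularCPD("Exercise", 2, [[0.4], [0.6]],
               state_names={"Exercise": ["low", "high"]}),
    TabularCPD("BloodPressure", 2,
               [[0.8, 0.2],   # P(BP=high | Exercise=low, Exercise=high)
                [0.2, 0.8]],  # P(BP=low  | Exercise=low, Exercise=high)
               evidence=["Exercise"], evidence_card=[2],
               state_names={"BloodPressure": ["high", "low"],
                            "Exercise": ["low", "high"]}),
    TabularCPD("HeartRisk", 2,
               [[0.9, 0.1],   # P(Risk=high | BP=high, BP=low)
                [0.1, 0.9]],  # P(Risk=low  | BP=high, BP=low)
               evidence=["BloodPressure"], evidence_card=[2],
               state_names={"HeartRisk": ["high", "low"],
                            "BloodPressure": ["high", "low"]}),
)

infer = CausalInference(model)

# "Actionable" counterfactual: intervene on the upstream, controllable cause,
# do(Exercise=high), and propagate the effect down the longer causal chain.
print(infer.query(["HeartRisk"], do={"Exercise": "high"}))

# "Direct cause" counterfactual: intervene immediately upstream of the outcome,
# do(BloodPressure=low). pgmpy handles the required adjustment automatically.
print(infer.query(["HeartRisk"], do={"BloodPressure": "low"}))
```

Both queries return a distribution over HeartRisk; the paper's question is not which query is easier to compute, but which of the two interventions people judge to be the more useful explanation.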

Keywords