Philosophies (Mar 2022)

On Explainable AI and Abductive Inference

  • Kyrylo Medianovskyi,
  • Ahti-Veikko Pietarinen

DOI
https://doi.org/10.3390/philosophies7020035
Journal volume & issue
Vol. 7, no. 2
p. 35

Abstract

Modern explainable AI (XAI) methods remain far from providing human-like answers to ‘why’ questions, let alone those that satisfactorily agree with human-level understanding. Instead, the results that such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s understanding of the quality of explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning (ML) algorithms perform genuinely abductive inferences. The paper outlines the key predicament in the current inductive paradigm of ML and the associated XAI techniques, and sketches the desiderata for a truly participatory, second-generation XAI, which is endowed with abduction.

Keywords