Applied AI Letters (Dec 2021)

Reframing explanation as an interactive medium: The EQUAS (Explainable QUestion Answering System) project

  • William Ferguson,
  • Dhruv Batra,
  • Raymond Mooney,
  • Devi Parikh,
  • Antonio Torralba,
  • David Bau,
  • David Diller,
  • Josh Fasching,
  • Jaden Fiotto‐Kaufman,
  • Yash Goyal,
  • Jeff Miller,
  • Kerry Moffitt,
  • Alex Montes de Oca,
  • Ramprasaath R. Selvaraju,
  • Ayush Shrivastava,
  • Jialin Wu,
  • Stefan Lee

DOI
https://doi.org/10.1002/ail2.60
Journal volume & issue
Vol. 2, no. 4

Abstract

This letter is a retrospective analysis of our team's research for the Defense Advanced Research Projects Agency Explainable Artificial Intelligence project. Our initial approach was to use salience maps, English sentences, and lists of feature names to explain the behavior of deep‐learning‐based discriminative systems, with particular focus on visual question answering systems. We found that presenting static explanations along with answers led to limited positive effects. By exploring various combinations of machine and human explanation production and consumption, we evolved a notion of explanation as an interactive process that usually takes place between humans and artificial intelligence systems but sometimes within the software system. We realized that by interacting via explanations, people could task and adapt machine learning (ML) agents. We added affordances for editing explanations and modified the ML system to act in accordance with the edits, producing an interpretable interface to the agent. Through this interface, editing an explanation can adapt a system's performance to new, modified purposes. This deep tasking, wherein the agent knows its objective and the explanation for that objective, will be critical to enable higher levels of autonomy.

Keywords