Machine Learning and Knowledge Extraction (Nov 2021)

A Multi-Component Framework for the Analysis and Design of Explainable Artificial Intelligence

  • Mi-Young Kim,
  • Shahin Atakishiyev,
  • Housam Khalifa Bashier Babiker,
  • Nawshad Farruque,
  • Randy Goebel,
  • Osmar R. Zaïane,
  • Mohammad-Hossein Motallebi,
  • Juliano Rabelo,
  • Talat Syed,
  • Hengshuai Yao,
  • Peter Chun

DOI
https://doi.org/10.3390/make3040045
Journal volume & issue
Vol. 3, no. 4
pp. 900 – 921

Abstract


The rapid growth of research in explainable artificial intelligence (XAI) follows from two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, has created high expectations for industrial, commercial, and social value. Second, there is an emerging and growing concern for creating ethical and trusted AI systems, including compliance with regulatory principles to ensure transparency and trust. These two threads have created a kind of “perfect storm” of research activity, all motivated to create and deliver tools and techniques that address the XAI demand. As some surveys of current XAI work suggest, a principled framework has yet to appear that respects the literature on explainability in the history of science and provides a basis for the development of transparent XAI. We identify four foundational components: the requirements for (1) explicit explanation knowledge representation, (2) delivery of alternative explanations, (3) adjustment of explanations based on knowledge of the explainee, and (4) exploitation of the advantages of interactive explanation. With those four components in mind, we provide a strategic inventory of XAI requirements, demonstrate their connection to a basic history of XAI ideas, and then synthesize those ideas into a simple framework that can guide the design of AI systems that require XAI.