Revista Brasileira de Computação Aplicada (Jan 2020)

A study about Explainable Artificial Intelligence: using decision tree to explain SVM

  • Carla Piazzon Ramos Vieira,
  • Luciano Antonio Digiampietri

DOI
https://doi.org/10.5335/rbca.v12i1.10247
Journal volume & issue
Vol. 12, no. 1
pp. 113–121

Abstract


The technologies supporting Artificial Intelligence (AI) have advanced rapidly over the past few years, and AI is becoming commonplace in every aspect of life, from self-driving cars to earlier health diagnosis. For this to happen soon, the entire community faces the barrier of explainability, an inherent problem of the latest models (e.g., Deep Neural Networks) that was not present in the previous wave of AI (linear and rule-based models). Most of these recent models are used as black boxes, without a partial or even complete understanding of how different features influence the model's predictions, which undermines algorithmic transparency. In this paper, we focus on how much we can understand the decisions made by an SVM classifier using a post-hoc, model-agnostic approach. We train a tree-based model (inherently interpretable) on labels produced by the SVM, called secondary training data, to provide explanations; we compare the permutation importance method to more commonly used measures such as accuracy and show that our methods are more reliable and meaningful. We also outline the main challenges for such methods and conclude that model-agnostic interpretability is a key component in making machine learning more trustworthy.
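The abstract's core idea, a decision-tree surrogate trained on an SVM's labels plus permutation importance, can be illustrated with the following minimal sketch. The dataset, hyperparameters, and scoring choices are illustrative assumptions, not the authors' actual experimental setup.

```python
# Hypothetical sketch of the surrogate-model approach described in the abstract:
# an SVM acts as the black box, its predictions become "secondary training data"
# for an inherently interpretable decision tree, and permutation importance is
# computed for comparison. All settings below are assumptions for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model: an SVM classifier.
svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Surrogate: a shallow decision tree trained on the SVM's labels
# (the "secondary training data").
svm_labels = svm.predict(X_train)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, svm_labels)

# Fidelity: how often the surrogate reproduces the SVM's decisions on unseen data.
fidelity = accuracy_score(svm.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to the SVM: {fidelity:.3f}")

# Permutation importance of the SVM itself, as a complement to plain accuracy.
result = permutation_importance(svm, X_test, y_test, n_repeats=10, random_state=0)
top_features = result.importances_mean.argsort()[::-1][:5]
print("Most important features (by permutation importance):", top_features)
```

In this kind of setup, the tree's splits can be read as an approximate, human-interpretable explanation of the SVM's decision boundary, while the fidelity score indicates how much that explanation can be trusted.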

Keywords