Algorithms (May 2023)

Iterative Oblique Decision Trees Deliver Explainable RL Models

  • Raphael C. Engelhardt,
  • Marc Oedingen,
  • Moritz Lange,
  • Laurenz Wiskott,
  • Wolfgang Konen

DOI
https://doi.org/10.3390/a16060282
Journal volume & issue
Vol. 16, no. 6
p. 282

Abstract

The demand for explainable and transparent models increases with the continued success of reinforcement learning. In this article, we explore the potential of generating shallow decision trees (DTs) as simple and transparent surrogate models for opaque deep reinforcement learning (DRL) agents. We investigate three algorithms for generating training data for axis-parallel and oblique DTs with the help of DRL agents (“oracles”) and evaluate these methods on classic control problems from OpenAI Gym. The results show that one of our newly developed algorithms, iterative training, outperforms traditional sampling algorithms, yielding well-performing DTs that often even surpass the oracle from which they were trained. Even higher-dimensional problems can be solved with surprisingly shallow DTs. We discuss the advantages and disadvantages of the different sampling methods, as well as the insights into the decision-making process that the transparency of DTs makes possible. Our work contributes to the development of RL agents that are not only powerful but also explainable, and highlights the potential of DTs as a simple and effective alternative to complex DRL models.
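The iterative training the abstract refers to can be pictured as a DAgger-style loop: roll out the current tree policy, label the visited states with the oracle, aggregate the data, and refit. The sketch below is illustrative only and assumes a toy point-mass environment, a hand-coded stand-in for the DRL oracle, and a depth-1 axis-parallel tree; none of these are the paper's actual setup.

```python
import numpy as np

def oracle(s):
    # Hypothetical oracle policy (stands in for a trained DRL agent):
    # push toward the origin based on position and velocity.
    return 0 if s[0] + 0.5 * s[1] > 0 else 1

def step(s, a):
    # Toy point-mass dynamics: action 0 pushes left, action 1 pushes right.
    x, v = s
    v += -0.05 if a == 0 else 0.05
    x += v
    return np.array([x, v])

def fit_stump(X, y):
    # Depth-1 axis-parallel tree: pick the (feature, threshold, labels)
    # combination with the best training accuracy.
    best = (0, 0.0, 0, 1, 0.0)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for lo, hi in ((0, 1), (1, 0)):
                pred = np.where(X[:, f] > t, hi, lo)
                acc = (pred == y).mean()
                if acc > best[4]:
                    best = (f, t, lo, hi, acc)
    f, t, lo, hi, _ = best
    return lambda s: hi if s[f] > t else lo

def rollout(policy, n_steps=200, seed=0):
    # Collect the states visited when acting under `policy`.
    rng = np.random.default_rng(seed)
    s = rng.uniform(-1, 1, size=2)
    states = []
    for _ in range(n_steps):
        states.append(s)
        s = step(s, policy(s))
    return np.array(states)

# Iterative training: start from oracle rollouts, then repeatedly roll out
# the current tree, label all visited states with the oracle, and refit.
X = rollout(oracle)
y = np.array([oracle(s) for s in X])
tree = fit_stump(X, y)
for i in range(3):
    X = np.vstack([X, rollout(tree, seed=i + 1)])
    y = np.array([oracle(s) for s in X])
    tree = fit_stump(X, y)

agreement = np.mean([tree(s) == oracle(s) for s in rollout(oracle, seed=42)])
print(f"tree/oracle agreement: {agreement:.2f}")
```

The key point of the iterative scheme is that the tree is trained on the state distribution it itself induces, rather than only on states the oracle happens to visit, which is what lets the distilled tree match (or exceed) the oracle on-policy.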

Keywords