Scientific Reports (Aug 2024)

Information based explanation methods for deep learning agents—with applications on large open-source chess models

  • Patrik Hammersborg
  • Inga Strümke

DOI
https://doi.org/10.1038/s41598-024-70701-2
Journal volume & issue
Vol. 14, no. 1
pp. 1–10

Abstract

With large chess-playing neural network models like AlphaZero contesting the state of the art in computerised chess, two challenges present themselves: how to explain the domain knowledge internalised by such models, and the fact that such models are not made openly available. This work presents a re-implementation of the concept detection methodology previously applied to AlphaZero, using large open-source chess models of comparable performance. We obtain results similar to those achieved with AlphaZero, while relying solely on open-source resources. We also present a novel explainable AI (XAI) method, which is guaranteed to highlight, exhaustively and exclusively, the information used by the explained model. This method generates visual explanations tailored to domains characterised by discrete input spaces, as is the case for chess. The presented method controls the information flow between any input vector and the given model, which in turn provides strict guarantees regarding what information the trained model uses during inference. We demonstrate the viability of our method by applying it to standard 8 × 8 chess, using large open-source chess models.
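The concept detection methodology the abstract refers to is, in broad strokes, linear probing of a network's internal activations. The sketch below is a hedged illustration, not the paper's implementation: the activation matrix and the concept labels are synthetic placeholders, whereas in practice activations would be captured from a layer of an open-source chess model and concept labels computed by an external annotator such as a classical engine.

```python
# Minimal linear-probe sketch for concept detection. All data here is
# synthetic and all names are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder for per-position activations from one layer of a chess model
# (n_positions x n_hidden). Real usage: run positions through the network
# and capture the layer's output.
activations = rng.normal(size=(2000, 256))

# Placeholder binary concept labels (e.g. "side to move has a mate threat"),
# normally produced by an external rule-based or engine-based annotator.
concept = (activations[:, :8].sum(axis=1) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    activations, concept, test_size=0.25, random_state=0
)

# A linear probe: if a simple linear model can predict the concept from the
# activations, the layer is taken to encode that concept.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```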
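The "strict guarantees" claim rests on a simple construction: if the model only ever receives a masked input, it provably cannot use the masked entries. The following toy sketch, assuming a hypothetical scalar evaluation function in place of a real chess model, shows this by construction, together with a naive greedy search for a small set of inputs that preserves the output.

```python
# Hedged sketch of the information-control idea: masked entries are replaced
# by a fixed baseline before the model sees the input, so the output cannot
# depend on them. The model and all names are hypothetical stand-ins.
import numpy as np

def mask_input(x, keep, baseline=0.0):
    """Replace every entry not in `keep` with a fixed baseline value."""
    masked = np.full_like(x, baseline, dtype=float)
    masked[keep] = x[keep]
    return masked

# Toy stand-in for a trained model's scalar evaluation of a position.
def toy_model(x):
    return float(x[:4].sum() - x[4:].sum())

x = np.arange(8, dtype=float)   # discrete input vector (e.g. a board encoding)
full = toy_model(x)

# Greedily drop inputs whose removal leaves the output unchanged: the
# surviving entries are exactly the information this prediction relies on.
keep = list(range(len(x)))
for i in sorted(range(len(x)), key=lambda j: abs(x[j])):
    trial = [k for k in keep if k != i]
    if abs(toy_model(mask_input(x, trial)) - full) < 1e-9:
        keep = trial
print("minimal informative inputs:", keep)
```

The exclusivity guarantee comes from the masking itself, not from the search: any search strategy over masks inherits it, since the model never observes a masked value.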