IEEE Access (Jan 2022)

Explainable AI for Time Series Classification: A Review, Taxonomy and Research Directions

  • Andreas Theissler
  • Francesco Spinnato
  • Udo Schlegel
  • Riccardo Guidotti

DOI: https://doi.org/10.1109/ACCESS.2022.3207765
Journal volume & issue: Vol. 10, pp. 100700–100724

Abstract

Time series data is increasingly used in a wide range of fields, and it is often relied on in crucial applications and high-stakes decision-making. For instance, sensors generate time series data to recognize different types of anomalies through automatic decision-making systems. Typically, these systems are realized with machine learning models that achieve top-tier performance on time series classification tasks. Unfortunately, the logic behind their predictions is opaque and hard to understand from a human standpoint. Recently, we have observed a consistent increase in the development of explanation methods for time series classification, justifying the need to structure and review the field. In this work, we (a) present the first extensive literature review on Explainable AI (XAI) for time series classification, (b) categorize the research field through a taxonomy subdividing the methods into time points-based, subsequences-based, and instance-based, and (c) identify open research directions regarding the types of explanations and the evaluation of explanations and interpretability.

Keywords