IEEE Access (Jan 2019)

TSViz: Demystification of Deep Learning Models for Time-Series Analysis

  • Shoaib Ahmed Siddiqui,
  • Dominique Mercier,
  • Mohsin Munir,
  • Andreas Dengel,
  • Sheraz Ahmed

DOI: https://doi.org/10.1109/ACCESS.2019.2912823
Journal volume & issue: Vol. 7, pp. 67027–67040

Abstract


This paper presents a novel framework for the demystification of convolutional deep learning models for time-series analysis. This is a step toward making informed, explainable decisions in the domain of time series, powered by deep learning. There have been numerous efforts to increase the interpretability of image-centric deep neural network models, where the learned features are more intuitive to visualize. Visualization in the domain of time series is significantly more challenging, as there is no direct interpretation of the filters and inputs compared with the imaging modality. In addition, little to no attention has been devoted in the past to the development of such tools for time series. TSViz provides possibilities to explore and analyze the network along different dimensions and at different levels of abstraction, which include identification of the parts of the input responsible for a particular prediction (including per-filter saliency), the importance of the different filters present in the network, the notion of diversity present in the network through filter clustering, an understanding of the main sources of variation learned by the network through inverse optimization, and an analysis of the network's robustness against adversarial noise. As a sanity check for the computed influence values, we demonstrate our results on pruning of neural networks based on the computed influence information. These representations allow the user to better understand the network, so that the acceptability of these deep models for time-series analysis can be enhanced. This is extremely important in domains such as finance, industry 4.0, self-driving cars, health care, and counter-terrorism, where the reasons for reaching a particular prediction are as important as the prediction itself. We assess the proposed framework for interpretability against a set of desirable properties essential for any method in this direction.
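To make the saliency idea described above concrete: the per-time-step influence of the input on a prediction can be estimated from the gradient of the model's output with respect to each input value. The sketch below is a minimal, hypothetical illustration (a toy one-filter 1D "conv net" and a finite-difference gradient in plain NumPy), not the authors' TSViz implementation or their network.

```python
import numpy as np

def toy_conv_net(x, w, v):
    # Minimal 1D conv "network": one conv filter + ReLU + linear readout.
    conv = np.convolve(x, w, mode="valid")
    relu = np.maximum(conv, 0.0)
    return float(v @ relu)

def saliency(x, predict, eps=1e-4):
    # Central finite-difference approximation of d(prediction)/d(x_t).
    # A large |gradient| marks the time steps most responsible for the output.
    grad = np.zeros_like(x)
    for t in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[t] += eps
        xm[t] -= eps
        grad[t] = (predict(xp) - predict(xm)) / (2 * eps)
    return np.abs(grad)

rng = np.random.default_rng(0)
x = rng.normal(size=32)   # toy input time series
w = rng.normal(size=5)    # conv filter weights (valid conv -> length 28)
v = rng.normal(size=28)   # linear readout weights

s = saliency(x, lambda z: toy_conv_net(z, w, v))
print(s.shape)  # one non-negative importance score per input time step
```

In a real deep network one would use automatic differentiation instead of finite differences, and the same gradient could be taken per filter activation to obtain the per-filter saliency the abstract mentions.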
