IEEE Access (Jan 2022)

An Interpretable Deep Learning Classifier for Epileptic Seizure Prediction Using EEG Data

  • Imene Jemal,
  • Neila Mezghani,
  • Lina Abou-Abbas,
  • Amar Mitiche

DOI
https://doi.org/10.1109/ACCESS.2022.3176367
Journal volume & issue
Vol. 10
pp. 60141–60150

Abstract

Deep learning has served pattern classification in many applications, with performance that often well exceeds that of other machine learning paradigms. Yet, in general, deep learning uses computational architectures built, at least partially, by ad hoc means, and its classification decisions are not necessarily interpretable in terms of knowledge relevant to the application they serve. This is often referred to as the black box problem, which in certain applications, such as epileptic seizure prediction, can be a serious impediment. The purpose of this study is to investigate an interpretable deep learning classifier for EEG-driven epileptic seizure prediction. The network is interpretable because its layers can be visualized and interpreted, thanks to a novel architecture in which the learned weights follow from signal processing computations such as frequency sub-band and spatial filters. Consequently, the extracted features are no longer abstract: they correspond to the features commonly used for decoding EEG data. In addition, the network uses layer-wise relevance propagation to reveal the pertinent features, which further explains the computations leading to its decisions. In seizure prediction experiments on the CHB-MIT data set, the method produced classification results that improve on the state of the art, with the filters of the first network layer corresponding to clinically relevant frequency bands, and the input channels located over the brain region in which the seizure originates contributing most significantly to the network's predictions.
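To illustrate the layer-wise relevance propagation (LRP) technique the abstract refers to, the following is a minimal numpy sketch of the common epsilon-rule variant on a toy feed-forward ReLU network. The function name, the toy network, and the epsilon rule itself are illustrative assumptions here, not the paper's actual architecture or implementation:

```python
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Epsilon-rule layer-wise relevance propagation through a small
    feed-forward ReLU network; returns a relevance score per input
    feature (e.g. per EEG channel/sample in the paper's setting)."""
    # Forward pass, storing the activations of every layer.
    activations = [x]
    for W, b in zip(weights, biases):
        x = np.maximum(0.0, W @ x + b)
        activations.append(x)
    # Start from the output activations as the relevance to distribute.
    R = activations[-1]
    # Backward pass: redistribute relevance layer by layer.
    for W, b, a in zip(reversed(weights), reversed(biases),
                       reversed(activations[:-1])):
        z = W @ a + b                              # pre-activations
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabiliser
        s = R / z                                  # relevance per unit of z
        R = a * (W.T @ s)                          # redistribute to inputs
    return R
```

The epsilon term only stabilises near-zero denominators, so the total relevance is approximately conserved from the output back to the inputs; inputs with large scores are the ones the network's decision depends on most, which is how such maps can point at the channels over the seizure-onset region.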
