IEEE Transactions on Neural Systems and Rehabilitation Engineering (Jan 2022)

An Interpretable Deep Learning Model for Speech Activity Detection Using Electrocorticographic Signals

  • Morgan Stuart,
  • Srdjan Lesaja,
  • Jerry J. Shih,
  • Tanja Schultz,
  • Milos Manic,
  • Dean J. Krusienski

DOI
https://doi.org/10.1109/TNSRE.2022.3207624
Journal volume & issue
Vol. 30
pp. 2783 – 2792

Abstract


Numerous state-of-the-art solutions for neural speech decoding and synthesis incorporate deep learning into the processing pipeline. These models are typically opaque and can require significant computational resources for training and execution. A deep learning architecture is presented that learns input bandpass filters that capture task-relevant spectral features directly from the data. Incorporating such explainable feature extraction into the model furthers the goal of creating end-to-end architectures that enable automated subject-specific parameter tuning while yielding an interpretable result. The model is implemented using intracranial brain data collected during a speech task. Using raw, unprocessed time samples, the model detects the presence of speech at every time sample in a causal manner, making it suitable for online application. Model performance is comparable to or better than existing approaches that require substantial signal preprocessing, and the learned frequency bands were found to converge to ranges supported by previous studies.
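The core idea of a learned band-pass front end operating causally on raw samples can be illustrated with a SincNet-style windowed-sinc kernel, where the low and high cutoff frequencies are the (trainable) parameters. This is a hedged sketch, not the authors' implementation: the function names, kernel size, and sampling rate below are illustrative assumptions, and the cutoffs are fixed rather than learned.

```python
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, kernel_size, fs):
    """Windowed-sinc band-pass FIR kernel with cutoffs f_low < f_high (Hz).

    In a learnable front end (SincNet-style), f_low and f_high would be
    trainable parameters updated by backpropagation; here they are fixed
    for illustration.
    """
    t = (np.arange(kernel_size) - (kernel_size - 1) / 2) / fs
    # Difference of two low-pass sinc responses yields a band-pass response.
    lowpass = lambda fc: 2 * fc * np.sinc(2 * fc * t)
    h = lowpass(f_high) - lowpass(f_low)
    h *= np.hamming(kernel_size)  # window to reduce spectral leakage
    return h

def causal_filter(x, h):
    """Causal convolution: output at sample n depends only on x[: n + 1]."""
    k = len(h)
    xp = np.concatenate([np.zeros(k - 1), x])  # left-pad only (causal)
    return np.convolve(xp, h, mode="valid")

# Toy signal: 10 Hz + 80 Hz components sampled at an assumed 1 kHz rate.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 80 * t)

# A 60-100 Hz band (roughly high-gamma-like) passes the 80 Hz component
# while attenuating the 10 Hz component.
h = sinc_bandpass_kernel(60.0, 100.0, kernel_size=129, fs=fs)
y = causal_filter(sig, h)
```

In the full architecture described by the abstract, a bank of such kernels would be convolved with each channel of the raw signal and the cutoffs optimized jointly with the downstream detector, which is what allows the learned bands to be inspected and compared with known speech-related frequency ranges.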

Keywords