IEEE Access (Jan 2022)

Deep Learning Inspired Feature Engineering for Classifying Tremor Severity

  • Ahmed Al Taee,
  • Seyedehmarzieh Hosseini,
  • Rami N. Khushaba,
  • Tanveer Zia,
  • Chin-Teng Lin,
  • Adel Al-Jumaily

DOI
https://doi.org/10.1109/ACCESS.2022.3210344
Journal volume & issue
Vol. 10
pp. 105377–105386

Abstract


Bio-signal pattern recognition systems can be affected by several factors that limit their performance and clinical translation. Among these factors, selecting an optimal feature extraction method that can effectively exploit the interaction between temporal and spatial information is the most prominent. Despite the potential of deep learning (DL) models for extracting temporal, spatial, or temporal-spatial information, they are typically restricted by their need for large amounts of training data. The deep wavelet scattering transform (WST) is a relatively recent advance in the DL literature that replaces expensive convolutional neural network models with computationally less demanding methods. However, while some studies have used the WST to extract features from biological signals, it has not previously been investigated for feature extraction from electromyogram (EMG) and electroencephalogram (EEG) signals. To test the hypothesis that the WST is useful for processing EMG and EEG signals, this study used a tremor dataset collected by the authors from people with tremor disorders. Specifically, the proposed work pursued three goals: (a) evaluate the performance of extracting features from low-density EMG signals (8 channels) using the WST approach; (b) evaluate feature extraction from high-density EEG signals (33 channels) using the WST and assess its robustness to changes in the spatial and temporal aspects that affect classification accuracy; and (c) classify tremor severity using the WST method and compare the results with other well-known feature extraction approaches. Classification error rates were significantly reduced (by up to nearly 12%) compared with other feature sets.
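
To illustrate the kind of feature extraction the abstract describes, the following is a minimal Python sketch of a wavelet scattering transform applied to one multi-channel signal window. It is not the authors' implementation: the Kymatio library, the channel count, the window length, and the J/Q scattering settings are assumptions chosen only for illustration.

    import numpy as np
    from kymatio.numpy import Scattering1D

    # Illustrative window: 8 channels (low-density EMG) x 1024 samples (assumed values).
    n_channels, n_samples = 8, 1024
    window = np.random.randn(n_channels, n_samples)  # placeholder for a real recording

    # Wavelet scattering transform: J sets the largest averaging scale (2**J samples),
    # Q sets the number of wavelets per octave in the first filter bank.
    scattering = Scattering1D(J=6, shape=n_samples, Q=8)
    coeffs = scattering(window)            # shape: (n_channels, n_paths, n_time)

    # Average over time and concatenate channels into a single feature vector
    # that a conventional classifier could consume for tremor-severity labels.
    features = coeffs.mean(axis=-1).reshape(-1)

The same per-channel transform would apply to high-density EEG windows, with only n_channels (e.g. 33) and possibly the window length changed.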

Keywords