IEEE Access (Jan 2021)

Advanced TSGL-EEGNet for Motor Imagery EEG-Based Brain-Computer Interfaces

  • Xin Deng,
  • Boxian Zhang,
  • Nian Yu,
  • Ke Liu,
  • Kaiwei Sun

DOI
https://doi.org/10.1109/ACCESS.2021.3056088
Journal volume & issue
Vol. 9
pp. 25118 – 25130

Abstract

Deep learning has spread rapidly in recent years and has been applied extensively in the field of Brain-Computer Interfaces (BCI). Although the accuracy of Motor Imagery (MI) BCI systems based on deep learning has improved considerably over traditional algorithms, clearly interpreting deep learning models remains a major challenge. To address this issue, this work first introduces the popular deep learning model EEGNet and compares it with the traditional Filter-Bank Common Spatial Pattern (FBCSP) algorithm. It then observes that the 1-D convolution of EEGNet can be interpreted as a special Discrete Wavelet Transform (DWT), and that the depthwise convolution of EEGNet is similar to the Common Spatial Pattern (CSP) algorithm. Building on this interpretation, this work improves EEGNet with the Temporally Constrained Sparse Group Lasso (TSGL) algorithm to enhance its performance. The proposed model, TSGL-EEGNet, is tested on the BCI Competition IV 2a and BCI Competition III IIIa datasets, both four-class MI classification tasks. The results show that the proposed model achieves 78.96% average classification accuracy (kappa 0.7194) on BCI Competition IV 2a, exceeding EEGNet, C2CM, MB3DCNN, SS-MEMDBF and FBCSP, especially on insensitive subjects. It also achieves 85.30% average classification accuracy (kappa 0.8040) on BCI Competition III IIIa, exceeding EEGNet, MFTFS and other methods. Finally, this work uses average-validation and stacking to further improve performance: the four-class average accuracy reaches 81.34% and 88.89%, with kappas of 0.7511 and 0.8519, on BCI Competition IV 2a and BCI Competition III IIIa, respectively. Additionally, Grad-CAM is used to visualize the frequency and spatial features learned by the neural network.
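The abstract does not state the exact form of the TSGL regularizer. As a rough illustration only, a generic sparse group lasso combines an element-wise L1 term with a per-group L2 term, and a temporal constraint can be approximated by penalizing differences between adjacent weights. The sketch below is under these assumptions; the weighting scheme, group definition, and temporal term are illustrative and not taken from the paper.

```python
import numpy as np

def tsgl_penalty(w, groups, lam1=0.1, lam2=0.1, lam3=0.1):
    """Illustrative sparse-group-lasso penalty with a temporal smoothness term.

    This is a generic sketch, not the paper's exact formulation:
      lam1 * ||w||_1                        element-wise sparsity
      lam2 * sum_g sqrt(p_g) * ||w_g||_2    group-wise sparsity
      lam3 * sum_t (w[t] - w[t-1])**2       assumed temporal constraint
    """
    w = np.asarray(w, dtype=float)
    l1 = np.abs(w).sum()
    # Each entry of `groups` is a list of indices forming one group.
    group_term = sum(np.sqrt(len(idx)) * np.linalg.norm(w[idx])
                     for idx in groups)
    # Squared first differences discourage abrupt temporal changes.
    temporal = np.sum(np.diff(w) ** 2)
    return lam1 * l1 + lam2 * group_term + lam3 * temporal
```

In training, such a penalty would be added to the classification loss so that gradient descent drives uninformative weights (and whole groups) toward zero while keeping the retained weights temporally smooth.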

Keywords