IEEE Access (Jan 2024)

Three-Branch Temporal-Spatial Convolutional Transformer for Motor Imagery EEG Classification

  • Weiming Chen,
  • Yiqing Luo,
  • Jie Wang

DOI
https://doi.org/10.1109/ACCESS.2024.3405652
Journal volume & issue
Vol. 12
pp. 79754–79764

Abstract

In the classification of motor imagery electroencephalogram (MI-EEG) signals with deep learning models, challenges such as insufficient feature extraction caused by the limited receptive field of single-scale convolutions, and overfitting caused by small training sets, can hinder the perception of global dependencies in EEG signals. In this paper, we introduce a network called EEG TBTSCTnet, short for Three-Branch Temporal-Spatial Convolutional Transformer. The approach first expands the training set through data augmentation and then combines local and global features for classification. Specifically, data augmentation mitigates the overfitting issue, while the three-branch temporal-spatial convolution module captures a broader range of multi-scale, low-level local information in EEG signals than conventional CNNs. A directly connected Transformer encoder module then extracts global correlations within the local temporal-spatial features, using the multi-head attention mechanism to strengthen the network's ability to represent relevant EEG signal features. Subsequently, a classifier module based on fully connected layers predicts the categories of the EEG signals. Finally, extensive experiments were conducted on two public MI-EEG datasets to evaluate the proposed method. The study also enabled an optimal selection of channels, balancing accuracy against cost, through weight visualization.
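To make the described pipeline concrete, the sketch below traces its data flow in plain NumPy: three parallel temporal convolutions with different kernel lengths (multi-scale receptive fields), their outputs stacked and passed through a multi-head self-attention stage (standing in for the Transformer encoder), then a fully connected classifier. All sizes, kernel lengths, and random weights are illustrative assumptions, not the paper's actual TBTSCTnet configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid-mode 1-D convolution over time.
    x: (in_ch, T), w: (out_ch, in_ch, k) -> (out_ch, T - k + 1)."""
    out_ch, in_ch, k = w.shape
    T = x.shape[1] - k + 1
    out = np.empty((out_ch, T))
    for o in range(out_ch):
        for t in range(T):
            out[o, t] = np.sum(w[o] * x[:, t:t + k])
    return out

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, n_heads=2):
    """X: (tokens, d_model); random projections stand in for learned weights."""
    tokens, d_model = X.shape
    d_k = d_model // n_heads
    heads = []
    for _ in range(n_heads):
        Wq = rng.standard_normal((d_model, d_k)) * 0.1
        Wk = rng.standard_normal((d_model, d_k)) * 0.1
        Wv = rng.standard_normal((d_model, d_k)) * 0.1
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)  # (tokens, tokens) attention map
        heads.append(A @ V)
    return np.concatenate(heads, axis=1)  # (tokens, n_heads * d_k)

# Toy MI-EEG trial: 22 channels x 256 time samples (hypothetical sizes)
x = rng.standard_normal((22, 256))

# Three temporal branches with different kernels -> multi-scale local features
branch_filters, kernels = 8, [15, 31, 63]
branches = [conv1d(x, rng.standard_normal((branch_filters, 22, k)) * 0.05)
            for k in kernels]

# Crop all branches to the shortest output so they align in time, then
# concatenate along the feature axis; each time step becomes one token
T_min = min(b.shape[1] for b in branches)
feats = np.concatenate([b[:, :T_min] for b in branches], axis=0)  # (24, T_min)
tokens = feats.T                                                  # (T_min, 24)

# Transformer-style global mixing over the local temporal-spatial features
mixed = multi_head_attention(tokens, n_heads=2)                   # (T_min, 24)

# Fully connected classifier on the time-averaged representation
n_classes = 4
W_fc = rng.standard_normal((mixed.shape[1], n_classes)) * 0.1
probs = softmax(mixed.mean(axis=0) @ W_fc)                        # (n_classes,)
```

The key design point the sketch mirrors is that the small-kernel branch responds to fast transients while the large-kernel branch covers slower rhythms, and the attention stage then relates those local features across the whole trial, which a single-scale CNN cannot do within one receptive field.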

Keywords