IEEE Access (Jan 2021)

Decoding Imagined Speech From EEG Using Transfer Learning

  • Jerrin Thomas Panachakel
  • Ramakrishnan Angarai Ganesan

DOI
https://doi.org/10.1109/ACCESS.2021.3116196
Journal volume & issue
Vol. 9
pp. 135371–135383

Abstract

We present a transfer learning-based approach for decoding imagined speech from the electroencephalogram (EEG). Features are extracted simultaneously from multiple EEG channels rather than separately from individual channels, which helps capture the interrelationships between cortical regions. To alleviate the scarcity of data for training deep networks, sliding window-based data augmentation is performed. Mean phase coherence and magnitude-squared coherence, two popular measures used in EEG connectivity analysis, are used as features. These features are compactly arranged, exploiting their symmetry, to obtain a three-dimensional "image-like" representation whose third dimension indexes the alpha, beta and gamma EEG frequency bands. A deep network with ResNet50 as the base model is used for classifying the imagined prompts. The proposed method is tested on the publicly available ASU dataset of imagined speech EEG, comprising four different types of prompts. Across subjects, the accuracy of decoding the imagined prompt ranges from a minimum of 79.7% for vowels to a maximum of 95.5% for short-long words. The accuracies obtained are better than those of state-of-the-art methods, and the technique performs well on prompts of different complexities.
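To make the pipeline concrete, below is a minimal sketch of the feature extraction described above. All specifics here are assumptions for illustration: the sampling rate, band edges, coherence window length, filter order, and the choice to pack mean phase coherence (MPC) into the upper triangle and band-averaged magnitude-squared coherence (MSC) into the lower triangle of each plane are one plausible reading of "compactly arranged, exploiting their symmetry," not the paper's exact settings.

import numpy as np
from scipy.signal import butter, coherence, filtfilt, hilbert

FS = 1000                                                        # sampling rate (Hz) -- assumed
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 70)}  # assumed band edges

def bandpass(x, lo, hi, fs=FS, order=4):
    # Zero-phase band-pass filtering of one EEG channel
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

def mean_phase_coherence(x, y):
    # MPC: magnitude of the mean phase-difference phasor, with
    # instantaneous phases taken from the Hilbert transform
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

def band_msc(x, y, lo, hi, fs=FS):
    # Magnitude-squared coherence averaged within one frequency band
    f, cxy = coherence(x, y, fs=fs, nperseg=256)
    mask = (f >= lo) & (f <= hi)
    return cxy[mask].mean()

def sliding_windows(trial, win, step):
    # Sliding-window augmentation: overlapping crops of one trial,
    # where trial has shape [channels, samples]
    return [trial[:, s:s + win] for s in range(0, trial.shape[1] - win + 1, step)]

def connectivity_image(eeg):
    # Build one [channels x channels] plane per band: MPC above the
    # diagonal, band-averaged MSC below it (both measures are symmetric,
    # so each triangle holds a full set of pairwise values). Stacking the
    # alpha/beta/gamma planes yields the three-channel "image".
    n_ch = eeg.shape[0]
    planes = []
    for lo, hi in BANDS.values():
        filt = np.array([bandpass(ch, lo, hi) for ch in eeg])
        plane = np.zeros((n_ch, n_ch))
        for i in range(n_ch):
            for j in range(i + 1, n_ch):
                plane[i, j] = mean_phase_coherence(filt[i], filt[j])
                plane[j, i] = band_msc(eeg[i], eeg[j], lo, hi)
        planes.append(plane)
    return np.stack(planes, axis=-1)                             # shape: (C, C, 3)

The classifier side can then be sketched as a standard ResNet50 transfer-learning setup; the classification head, input size (64 channels assumed) and training configuration are again assumptions rather than the paper's settings.

import tensorflow as tf

def build_model(n_classes, n_channels=64):                       # 64 EEG channels assumed
    # ImageNet-pretrained ResNet50 base with a softmax head on top
    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet",
        input_shape=(n_channels, n_channels, 3), pooling="avg")
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model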

Keywords