IEEE Access (Jan 2019)

Universal Joint Feature Extraction for P300 EEG Classification Using Multi-Task Autoencoder

  • Apiwat Ditthapron,
  • Nannapas Banluesombatkul,
  • Sombat Ketrat,
  • Ekapol Chuangsuwanich,
  • Theerawit Wilaiprasitporn

DOI
https://doi.org/10.1109/ACCESS.2019.2919143
Journal volume & issue
Vol. 7
pp. 68415 – 68428

Abstract

The process of recording electroencephalography (EEG) signals is onerous, and storing the signals at a usable sampling rate requires massive storage. In this paper, we propose the event-related potential encoder network (ERPENet), a multi-task autoencoder-based model that can be applied to any ERP-related task. The strength of ERPENet lies in its capability to handle various kinds of ERP datasets and its robustness across multiple recording setups, enabling joint training across datasets. ERPENet combines convolutional neural networks (CNNs) and long short-term memory (LSTM) units in an autoencoder setup that simultaneously compresses the input EEG signal and extracts P300-related features into a latent vector. We regard the process of generating this latent vector as universal joint feature extraction. The network also includes a classification branch for attended/unattended event classification as an auxiliary task. We experimented on six different P300 datasets. The results show that the latent vector exhibits better compression capability than the previous state-of-the-art semi-supervised autoencoder model. For attended/unattended event classification, the pre-trained weights are adopted as initial weights and tested on unseen P300 datasets to evaluate the adaptability of the model; this shortens the training process compared to random Xavier weight initialization. At a compression rate of 6.84, the classification accuracy outperforms conventional P300 classification models (XdawnLDA, DeepConvNet, and EEGNet), achieving 79.37%–88.52% classification accuracy depending on the dataset.
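The abstract compares pre-trained weights against random Xavier weight initialization as a baseline. For readers unfamiliar with that baseline, Xavier (Glorot) uniform initialization draws each weight from a symmetric uniform range scaled by the layer's fan-in and fan-out. A minimal stdlib-only sketch (the function name and example dimensions are illustrative, not taken from the paper):

```python
import math
import random

def xavier_uniform(fan_in: int, fan_out: int, seed: int = 0):
    """Return a fan_in x fan_out weight matrix with Glorot/Xavier
    uniform initialization: W ~ U(-limit, +limit), where
    limit = sqrt(6 / (fan_in + fan_out))."""
    rng = random.Random(seed)
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

# Illustrative layer sizes (hypothetical, not from ERPENet):
W = xavier_uniform(64, 32)
```

The scaling keeps the variance of activations and gradients roughly constant across layers, which is why it is a standard starting point; pre-training simply replaces this random draw with weights learned on other datasets.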

Keywords