IEEE Access (Jan 2024)
Two-Phase Multitask Autoencoder-Based Deep Learning Framework for Subject-Independent EEG Motor Imagery Classification
Abstract
Electroencephalography (EEG)-based motor imagery (MI) has potential applications in diverse fields including rehabilitation, drone control, and virtual reality. However, its practical use is hindered by low generalization performance in decoding brain signals, primarily due to the subject dependency of EEG signals. Although multitask autoencoder (MTAE) techniques have recently been used to mitigate this issue, these approaches suffer from an imbalance between loss functions with different objectives, particularly between the reconstruction loss and the cross-entropy loss. To address this, we propose a novel two-phase multitask autoencoder (2PMTAE) framework that not only rectifies this imbalance but also ensures stable training of the MTAE. Our framework comprises two phases: first, the generation of class-specific target signals, and second, the calculation of the reconstruction loss against these generated targets, effectively aligning the objectives of the two loss functions. In subject-independent experiments, our proposed method significantly outperformed state-of-the-art techniques, achieving accuracies of 71.68% and 75.78% on the BCI Competition IV-2a and OpenBMI datasets, respectively. We also show that 2PMTAE is a generic framework for MI applications that can accept any encoder the practitioner wishes to employ. These results highlight the efficacy of our approach in enhancing the generalization performance of MI-EEG decoding.
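To make the two-phase idea concrete, the following is a minimal PyTorch-style sketch of a multitask autoencoder training step in which the reconstruction target is a class-specific signal rather than the input itself. The encoder architecture, the way class-specific targets are generated (here, a per-class mean trial), and the loss weighting are illustrative assumptions for exposition, not the exact procedure of the paper.

```python
# Minimal sketch of a two-phase MTAE-style training step (assumptions noted above).
import torch
import torch.nn as nn


class MTAE(nn.Module):
    """Shared encoder with a classification head and a reconstruction decoder."""

    def __init__(self, n_channels=22, n_samples=1000, n_classes=4, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_samples, latent_dim),
            nn.ELU(),
        )
        self.classifier = nn.Linear(latent_dim, n_classes)
        self.decoder = nn.Linear(latent_dim, n_channels * n_samples)

    def forward(self, x):
        z = self.encoder(x)
        logits = self.classifier(z)
        recon = self.decoder(z).view_as(x)
        return logits, recon


def make_class_targets(x_train, y_train, n_classes):
    """Phase 1 (assumed variant): one target signal per class, here the per-class mean trial."""
    return torch.stack([x_train[y_train == c].mean(dim=0) for c in range(n_classes)])


def train_step(model, optimizer, x, y, class_targets, recon_weight=1.0):
    """Phase 2: reconstruct the class-specific target of each trial's label, so the
    reconstruction objective points in the same direction as the cross-entropy loss."""
    logits, recon = model(x)
    ce_loss = nn.functional.cross_entropy(logits, y)
    recon_loss = nn.functional.mse_loss(recon, class_targets[y])
    loss = ce_loss + recon_weight * recon_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the same encoder output feeds both heads; because every trial of a class is pulled toward the same target signal, the reconstruction term no longer competes with the classification term, which is the alignment effect the abstract describes.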
Keywords