Journal of Integrative Neuroscience (Dec 2024)

Motion Cognitive Decoding of Cross-Subject Motor Imagery Guided on Different Visual Stimulus Materials

  • Tian-jian Luo,
  • Jing Li,
  • Rui Li,
  • Xiang Zhang,
  • Shen-rui Wu,
  • Hua Peng

DOI
https://doi.org/10.31083/j.jin2312218
Journal volume & issue
Vol. 23, no. 12
p. 218

Abstract


Background: Motor imagery (MI) plays an important role in brain-computer interfaces, particularly in evoking event-related desynchronization and synchronization (ERD/S) rhythms in electroencephalogram (EEG) signals. However, how a given subject performs an MI task is subjective, making it difficult to assess how an individual actually carries out the task and producing substantial inter-individual variation in EEG responses during motion cognitive decoding. Methods: To explore this issue, we designed three visual stimuli (arrow, human, and robot), each used to present three MI tasks (left arm, right arm, and feet), and evaluated differences in brain response in terms of ERD/S rhythms. To compare subject-specific variation across the three visual stimuli, a novel cross-subject MI-EEG classification method was proposed. The method preprocesses EEG samples by centroid alignment of their covariance matrices, followed by model-agnostic meta-learning for cross-subject MI-EEG classification. Results and Conclusion: The experimental results showed that the robot stimulus material outperformed the arrow and human materials, with an optimal cross-subject motion cognitive decoding accuracy of 79.04%. Moreover, the proposed method yielded robust cross-subject MI-EEG signal decoding, outperforming conventional methods on the collected EEG signals.
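The covariance-centroid alignment step mentioned in the abstract can be sketched as follows. This is a minimal illustration of Euclidean covariance alignment, in which each trial is whitened by the inverse square root of the mean spatial covariance so that the aligned trials have an identity average covariance; the function name, array shapes, and the use of the Euclidean (arithmetic) centroid are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def centroid_align(trials):
    """Align EEG trials via covariance centroid alignment (sketch).

    trials: array of shape (n_trials, n_channels, n_samples).
    Returns trials projected so their mean spatial covariance is the
    identity, reducing inter-subject distribution shift before
    cross-subject classification.
    """
    # Per-trial spatial covariance matrices (channels x channels)
    covs = np.array([t @ t.T / t.shape[1] for t in trials])
    # Euclidean centroid of the covariance matrices
    r_bar = covs.mean(axis=0)
    # Inverse matrix square root of the centroid
    r_inv_sqrt = fractional_matrix_power(r_bar, -0.5)
    # Project every trial into the aligned space
    return np.array([r_inv_sqrt @ t for t in trials])
```

After alignment, the average covariance of each subject's trials equals the identity matrix, so features from different subjects lie in a comparable space for the subsequent meta-learning classifier.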

Keywords