IEEE Access (Jan 2025)
A Spatiotemporal Feature Extraction Technique Using Superlet-CNN Fusion for Improved Motor Imagery Classification
Abstract
In the realm of Brain-Computer Interface (BCI) research, the precise decoding of motor imagery electroencephalogram (MI-EEG) signals is pivotal for realizing systems that can be seamlessly integrated into practical applications, enhancing the autonomy of individuals with mobility impairments. This study presents an enhanced method for the precise recognition of MI tasks from EEG data, to facilitate more intuitive interactions between individuals with mobility challenges and their environment. The core challenge addressed herein is the development of robust algorithms that enable accurate identification of MI tasks, thereby empowering individuals with mobility impairments to control devices and interfaces through cognitive commands. Although many different methods exist for analyzing MI-EEG signals, research into deep learning and transfer learning approaches for this problem remains scarce. This research leverages the superlet transform (SLT) to convert EEG signals into a two-dimensional (2-D) high-resolution spectral representation. This 2-D representation of segmented MI-EEG signals is then processed through an adapted pretrained residual network, which classifies the MI-EEG signals. The effectiveness of the proposed technique is evident in the achieved classification accuracy of 99.9% for binary tasks and 96.4% for multi-class tasks, representing a significant advancement over existing methods. Through an intensive comparison with existing algorithms across a variety of performance metrics, the present study demonstrates the ability of the proposed approach to accurately classify the different MI categories from EEG signals, constituting a substantial contribution to the field of BCI research.
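As an illustration of the first stage of the pipeline described above, the following is a minimal sketch of a multiplicative superlet transform that turns a single-channel EEG trial into the 2-D time-frequency image fed to the CNN. The abstract does not specify the sampling rate, frequency range, base cycle count, or superlet order, so the values below (250 Hz, 8-30 Hz, base_cycles=3, order=5) are assumptions for demonstration only, not the paper's settings.

```python
import numpy as np

def morlet_wavelet(freq, n_cycles, fs):
    """Complex Morlet wavelet with a Gaussian envelope of n_cycles cycles."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)          # envelope std. dev. in seconds
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / fs)
    wave = np.exp(-t ** 2 / (2.0 * sigma_t ** 2)) * np.exp(2j * np.pi * freq * t)
    return wave / np.sum(np.abs(wave))                  # L1 normalisation

def superlet_transform(signal, fs, freqs, base_cycles=3, order=5):
    """Multiplicative superlet: geometric mean of Morlet magnitude responses
    with cycle counts base_cycles * (1 .. order). Parameters are assumed."""
    scalogram = np.zeros((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        log_acc = np.zeros(len(signal))
        for k in range(1, order + 1):
            w = morlet_wavelet(f, base_cycles * k, fs)
            mag = np.abs(np.convolve(signal, w, mode="same"))
            log_acc += np.log(mag + 1e-12)               # accumulate in log domain
        scalogram[i] = np.exp(log_acc / order)           # geometric mean across orders
    return scalogram

if __name__ == "__main__":
    fs = 250.0                                           # assumed sampling rate
    t = np.arange(0, 4.0, 1.0 / fs)                      # one 4-second MI trial (synthetic)
    trial = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(len(t))
    freqs = np.linspace(8.0, 30.0, 45)                   # assumed mu/beta frequency grid
    tfr = superlet_transform(trial, fs, freqs)
    print(tfr.shape)                                     # (45, 1000) 2-D map for the CNN
```

The resulting map could then be resized and replicated into a three-channel image for a pretrained residual network whose final fully connected layer is replaced to match the number of MI classes; the specific backbone, input size, and training details are described in the body of the paper rather than in the abstract.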
Keywords