IEEE Transactions on Neural Systems and Rehabilitation Engineering (Jan 2024)

A Novel Multi-Feature Fusion Network With Spatial Partitioning Strategy and Cross-Attention for Armband-Based Gesture Recognition

  • Fo Hu,
  • Mengyuan Qian,
  • Kailun He,
  • Wen-An Zhang,
  • Xusheng Yang

DOI
https://doi.org/10.1109/TNSRE.2024.3487216
Journal volume & issue
Vol. 32
pp. 3878 – 3890

Abstract


Effectively integrating the time-space-frequency information of multi-modal signals from armband sensors, including surface electromyogram (sEMG) and accelerometer data, is critical for accurate gesture recognition. Existing approaches often neglect the rich spatial relationships inherent in multi-channel sEMG signals acquired by armband sensors, and they struggle to exploit the correlations across multiple feature domains. To address these issues, we propose a novel multi-feature fusion network with a spatial partitioning strategy and cross-attention (MFN-SPSCA) to improve the accuracy and robustness of gesture recognition. Specifically, a spatiotemporal graph convolution module with a spatial partitioning strategy is designed to capture the latent spatial features of multi-channel sEMG signals. Additionally, we design a cross-attention fusion module to learn and prioritize the importance of, and correlations among, multiple feature domains. Extensive experiments demonstrate that MFN-SPSCA outperforms other state-of-the-art methods on a self-collected dataset and on the Ninapro DB5 dataset. Our work addresses the challenge of recognizing gestures from the multi-modal data collected by armband sensors, emphasizing the importance of integrating time-space-frequency information. Code is available at https://github.com/ZJUTofBrainIntelligence/MFN-SPSCA.
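To make the cross-attention fusion idea concrete, the sketch below shows a generic single-head cross-attention step in NumPy, where feature tokens from one domain (e.g. sEMG) act as queries over feature tokens from another domain (e.g. accelerometer). This is a minimal illustration of the general mechanism, not the authors' MFN-SPSCA implementation; the token counts, dimensions, and random projection matrices are all assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats, d_k=16, seed=0):
    """Generic single-head cross-attention: queries come from one feature
    domain, keys/values from another, so one modality can weight the other.
    Projection matrices are random here purely for illustration; in a real
    network they would be learned parameters."""
    rng = np.random.default_rng(seed)
    dq, dc = query_feats.shape[-1], context_feats.shape[-1]
    Wq = rng.standard_normal((dq, d_k)) / np.sqrt(dq)
    Wk = rng.standard_normal((dc, d_k)) / np.sqrt(dc)
    Wv = rng.standard_normal((dc, d_k)) / np.sqrt(dc)
    Q = query_feats @ Wq          # (n_q, d_k)
    K = context_feats @ Wk        # (n_c, d_k)
    V = context_feats @ Wv        # (n_c, d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    attn = softmax(scores, axis=-1)   # each query row sums to 1
    return attn @ V               # (n_q, d_k) fused representation

# Toy usage: 8 hypothetical sEMG feature tokens (dim 32) attend to
# 4 hypothetical accelerometer feature tokens (dim 12).
emg = np.random.default_rng(1).standard_normal((8, 32))
acc = np.random.default_rng(2).standard_normal((4, 12))
fused = cross_attention(emg, acc)
print(fused.shape)  # (8, 16)
```

In a multi-feature fusion network, the attention weights let the model emphasize whichever feature domain is most informative for each query token, which is the intuition behind prioritizing importance and correlation across domains.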

Keywords