Frontiers in Neuroscience (Jul 2023)

Contrastive self-supervised representation learning without negative samples for multimodal human action recognition

  • Huaigang Yang,
  • Ziliang Ren,
  • Huaqiang Yuan,
  • Zhenyu Xu,
  • Jun Zhou

DOI
https://doi.org/10.3389/fnins.2023.1225312
Journal volume & issue
Vol. 17

Abstract


Action recognition is an important component of human-computer interaction, and multimodal feature representation and learning methods can improve recognition performance by exploiting the interrelation and complementarity between different modalities. However, owing to the lack of large-scale labeled samples, the performance of existing ConvNet-based methods is severely constrained. In this paper, a novel and effective multimodal feature representation and contrastive self-supervised learning framework is proposed to improve the action recognition performance of models and their generalization ability across application scenarios. The proposed recognition framework employs weight sharing between its two branches and does not require negative samples, allowing it to learn useful feature representations from multimodal unlabeled data, e.g., skeleton sequences and inertial measurement unit (IMU) signals. Extensive experiments are conducted on two benchmarks, UTD-MHAD and MMAct, and the results show that our proposed recognition framework outperforms both unimodal and multimodal baselines in action retrieval, semi-supervised learning, and zero-shot learning scenarios.
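To make the two-branch, negative-sample-free idea concrete, below is a minimal Python sketch of a symmetrized, stop-gradient cosine objective in the style of negative-free methods such as SimSiam/BYOL. The encoder modules, feature dimensions, and the specific loss are illustrative assumptions for exposition only, not the paper's actual implementation; only the projection and prediction heads are shared between the two modality branches here, mirroring the weight-sharing design described in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def negative_free_loss(p, z):
    # Negative cosine similarity with a stop-gradient on the target branch;
    # this is the standard trick that avoids the need for negative samples.
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

# Hypothetical per-modality encoders (placeholders, not the paper's backbones).
skel_encoder = nn.Linear(75, 128)   # skeleton-sequence features -> embedding
imu_encoder = nn.Linear(6, 128)     # IMU-signal features -> embedding
projector = nn.Linear(128, 64)      # shared between both branches
predictor = nn.Linear(64, 64)       # shared between both branches

skeleton_x = torch.randn(32, 75)    # dummy batch of skeleton features
imu_x = torch.randn(32, 6)          # dummy batch of IMU features

z1 = projector(skel_encoder(skeleton_x))
z2 = projector(imu_encoder(imu_x))
p1, p2 = predictor(z1), predictor(z2)

# Symmetrized objective: each branch predicts the other modality's
# representation; no negative pairs are required.
loss = 0.5 * (negative_free_loss(p1, z2) + negative_free_loss(p2, z1))
loss.backward()
```

The stop-gradient on the target branch is what prevents the trivial collapse that negative samples would otherwise guard against; the cross-modal pairing lets the skeleton and IMU streams act as natural positive views of the same action.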

Keywords