IEEE Access (Jan 2024)

Temporal Shift Module-Based Vision Transformer Network for Action Recognition

  • Kunpeng Zhang,
  • Mengyan Lyu,
  • Xinxin Guo,
  • Liye Zhang,
  • Cong Liu

DOI
https://doi.org/10.1109/ACCESS.2024.3379885
Journal volume & issue
Vol. 12
pp. 47246 – 47257

Abstract


This paper introduces a novel action recognition model named ViT-Shift, which combines the Temporal Shift Module (TSM) with the Vision Transformer (ViT) architecture. Video action recognition is computationally demanding and typically requires substantial computing resources; our model addresses this issue by incorporating the TSM, achieving strong performance while significantly reducing computational cost. Our approach applies the Transformer self-attention mechanism to video sequence processing instead of traditional convolutional methods. To preserve the core architecture of ViT and transfer its excellent image recognition performance to video action recognition, we strategically introduce the TSM only before the multi-head attention layer of ViT. This design simulates temporal interactions through channel shifts, effectively reducing computational complexity. We carefully design the position and shift parameters of the TSM to maximize the model’s performance. Experimental results demonstrate that ViT-Shift achieves remarkable results on two standard action recognition datasets: with ImageNet-21K pretraining, it reaches 77.55% accuracy on Kinetics-400 and 93.07% on UCF-101.
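To illustrate the mechanism the abstract describes, the following is a minimal sketch (not the authors' code) of a temporal channel shift applied to ViT patch tokens before a block's multi-head attention. The tensor layout, the function name temporal_shift, and the shift_ratio parameter are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: shift a fraction of feature channels forward/backward along the
# time axis so each frame's tokens mix information from neighboring frames,
# without adding extra attention computation. Names and layout are assumptions.
import torch


def temporal_shift(x: torch.Tensor, num_frames: int, shift_ratio: float = 0.25) -> torch.Tensor:
    """x: (batch * num_frames, num_tokens, channels) ViT token embeddings."""
    bt, n, c = x.shape
    b = bt // num_frames
    x = x.view(b, num_frames, n, c)

    fold = int(c * shift_ratio) // 2  # channels shifted in each temporal direction
    out = torch.zeros_like(x)
    out[:, 1:, :, :fold] = x[:, :-1, :, :fold]                   # shift forward in time
    out[:, :-1, :, fold:2 * fold] = x[:, 1:, :, fold:2 * fold]   # shift backward in time
    out[:, :, :, 2 * fold:] = x[:, :, :, 2 * fold:]              # remaining channels unchanged
    return out.view(bt, n, c)


# Usage: applied to the token sequence right before multi-head attention.
tokens = torch.randn(2 * 8, 197, 768)   # e.g., 2 clips of 8 frames, ViT-B/16 tokens
shifted = temporal_shift(tokens, num_frames=8)
```

Because the shift is a zero-parameter tensor reindexing, it adds essentially no FLOPs, which is consistent with the abstract's claim that temporal interaction is obtained while keeping computational cost low.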

Keywords