Applied Sciences (Aug 2022)

GLFormer: Global and Local Context Aggregation Network for Temporal Action Detection

  • Yilong He,
  • Yong Zhong,
  • Lishun Wang,
  • Jiachen Dang

DOI
https://doi.org/10.3390/app12178557
Journal volume & issue
Vol. 12, no. 17
p. 8557

Abstract


As a core component of video analysis, Temporal Action Localization (TAL) has achieved remarkable success. However, several issues remain unresolved. First, most existing methods process local context in isolation, without explicitly exploiting the relations among the features of an action instance as a whole. Second, the duration of different actions varies widely, making it difficult to choose a proper temporal receptive field. To address these issues, this paper proposes a novel network, GLFormer, which aggregates short-, medium-, and long-range temporal context. Our method consists of three independent branches with different attention ranges, whose output features are concatenated along the temporal dimension to obtain richer features. The first is multi-scale local convolution (MLC), which comprises multiple 1D convolutions with varying kernel sizes to capture multi-scale context information. The second is window self-attention (WSA), which explores the relationships between features within the window range. The third is global attention (GA), which establishes long-range dependencies across the full sequence. Moreover, we design a feature pyramid structure to accommodate action instances of various durations. GLFormer achieves state-of-the-art performance on two challenging video benchmarks, reaching 67.2% and 54.5% mAP@0.5 on THUMOS14 and ActivityNet 1.3, respectively.
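The abstract's three-branch design (multi-scale local convolution, window self-attention, and global attention, concatenated along the temporal dimension) can be illustrated with a toy NumPy sketch. This is a minimal illustration of the idea only, not the paper's implementation: it uses identity Q/K/V projections, simple moving-average kernels in place of learned 1D convolutions, and hypothetical function names (`glformer_block`, `multi_scale_conv`, etc.) chosen for this example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (T, C). Identity Q/K/V projections for brevity; the real model
    # would learn these. Scaled dot-product attention over all T steps.
    scores = softmax(x @ x.T / np.sqrt(x.shape[1]), axis=-1)
    return scores @ x

def window_attention(x, window=4):
    # WSA sketch: split the sequence into non-overlapping windows and
    # attend only within each window.
    T, _ = x.shape
    out = np.zeros_like(x)
    for start in range(0, T, window):
        out[start:start + window] = self_attention(x[start:start + window])
    return out

def multi_scale_conv(x, kernel_sizes=(1, 3, 5)):
    # MLC sketch: depthwise 1D "convolutions" (here plain moving averages)
    # with several kernel sizes, averaged so each step sees local context
    # at multiple scales.
    T, C = x.shape
    branches = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k
        conv = np.stack(
            [np.convolve(x[:, c], kernel, mode="same") for c in range(C)],
            axis=1,
        )
        branches.append(conv)
    return np.mean(branches, axis=0)

def glformer_block(x, window=4):
    # Run the three branches on the same input and concatenate their
    # outputs along the temporal dimension, as the abstract describes.
    mlc = multi_scale_conv(x)          # short-range local context
    wsa = window_attention(x, window)  # medium-range, within windows
    ga = self_attention(x)             # long-range, full sequence
    return np.concatenate([mlc, wsa, ga], axis=0)

T, C = 8, 16
feats = np.random.default_rng(0).normal(size=(T, C))
out = glformer_block(feats)
print(out.shape)  # three branches of length T stacked temporally: (24, 16)
```

The temporal concatenation triples the sequence length while keeping the channel width fixed, which is consistent with the abstract's description of merging the three context ranges into a single richer feature sequence.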
