IET Image Processing (Apr 2022)

A graph convolutional neural network model with Fisher vector encoding and channel‐wise spatial‐temporal aggregation for skeleton‐based action recognition

  • Jun Tang,
  • Yanjiang Wang,
  • Sichao Fu,
  • Baodi Liu,
  • Weifeng Liu

DOI
https://doi.org/10.1049/ipr2.12422
Journal volume & issue
Vol. 16, no. 5
pp. 1433 – 1443

Abstract

Skeleton-based action recognition is an inspiring yet challenging task in computer vision. Recently, the graph convolutional network (GCN), which generalises well-established convolutional neural networks to non-Euclidean structures, has proven highly successful for action recognition from body skeleton data. However, the GCN architecture has not been fully explored. In this work, a Fisher vector (FV) encoding based GCN architecture (FV-GCN) is proposed, which overcomes the limitations of existing GCN-based methods by combining the GCN model with FV encoding. A channel-wise spatial-temporal aggregation function is also presented to preserve spatial-temporal information across the whole action clip, and it is integrated into the FV-GCN architecture. Because FV encoding differs from the GCN structure, this hybrid architecture incorporates the advantages of both algorithms and can effectively discover complementary information in the feature representation. On two challenging human action datasets, Kinetics and NTU-RGBD, the FV-GCN demonstrates improved performance over the baseline method and is better than or comparable to several state-of-the-art methods.
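
To make the two ingredients of the abstract concrete, the following is a minimal sketch (not the authors' code) of how channel-wise spatial-temporal aggregation of GCN feature maps and Fisher vector encoding could be combined. The function names (channelwise_st_aggregate, fisher_vector), the feature shapes, the joint-averaging aggregation, and the use of scikit-learn's GaussianMixture are illustrative assumptions; the paper's actual aggregation function and FV pipeline may differ.

import numpy as np
from sklearn.mixture import GaussianMixture

def channelwise_st_aggregate(gcn_out):
    # Hypothetical channel-wise spatial-temporal aggregation.
    # gcn_out: array of shape (C, T, V) -- channels, frames, skeleton joints.
    # Here each frame's joints are averaged per channel, yielding per-frame
    # descriptors of shape (T, C); this is a simplified stand-in for the
    # aggregation described in the paper.
    return gcn_out.mean(axis=2).T  # (T, C)

def fisher_vector(descriptors, gmm):
    # Standard improved Fisher vector of a set of local descriptors.
    # descriptors: (N, D) array; gmm: fitted GaussianMixture with diagonal
    # covariances. Returns a vector of length 2*K*D (deviations with respect
    # to the GMM means and variances).
    N, _ = descriptors.shape
    q = gmm.predict_proba(descriptors)          # (N, K) soft assignments
    mu = gmm.means_                             # (K, D)
    sigma = np.sqrt(gmm.covariances_)           # (K, D), diagonal covariances
    w = gmm.weights_                            # (K,)

    diff = (descriptors[:, None, :] - mu[None, :, :]) / sigma[None, :, :]   # (N, K, D)
    g_mu = (q[:, :, None] * diff).sum(axis=0) / (N * np.sqrt(w)[:, None])
    g_sigma = (q[:, :, None] * (diff ** 2 - 1)).sum(axis=0) / (N * np.sqrt(2 * w)[:, None])

    fv = np.concatenate([g_mu.ravel(), g_sigma.ravel()])
    # Power- and L2-normalisation, as is common for the improved FV
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)

# Usage sketch: fit the GMM on pooled training descriptors, then encode a clip.
rng = np.random.default_rng(0)
train_desc = rng.normal(size=(1000, 64))        # placeholder for GCN features
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(train_desc)
clip_features = rng.normal(size=(64, 300, 25))  # placeholder (C, T, V) GCN output
fv = fisher_vector(channelwise_st_aggregate(clip_features), gmm)
print(fv.shape)                                 # (1024,) = 2 * 8 components * 64 dims

In this sketch the FV acts as a clip-level encoder on top of frame-level GCN descriptors, which is one plausible reading of how the generative FV statistics could complement the discriminative GCN features, as the abstract suggests.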