IEEE Access (Jan 2022)

ViGAT: Bottom-Up Event Recognition and Explanation in Video Using Factorized Graph Attention Network

  • Nikolaos Gkalelis,
  • Dimitrios Daskalakis,
  • Vasileios Mezaris

DOI
https://doi.org/10.1109/ACCESS.2022.3213652
Journal volume & issue
Vol. 10
pp. 108797 – 108816

Abstract

In this paper we propose a pure-attention bottom-up approach, called ViGAT, which utilizes an object detector together with a Vision Transformer (ViT) backbone network to derive object and frame features, and a head network to process these features for the task of event recognition and explanation in video. The ViGAT head consists of graph attention network (GAT) blocks factorized along the spatial and temporal dimensions, so as to effectively capture both local and long-term dependencies between objects or frames. Moreover, using the weighted in-degrees (WiDs) derived from the adjacency matrices at the various GAT blocks, we show that the proposed architecture can identify the most salient objects and frames that explain the network's decision. A comprehensive evaluation study demonstrates that the proposed approach achieves state-of-the-art results on three large, publicly available video datasets (FCVID, MiniKinetics, ActivityNet). Source code is made publicly available at: https://github.com/bmezaris/ViGAT
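The explanation mechanism described in the abstract, ranking objects or frames by the weighted in-degrees (WiDs) of a GAT block's attention adjacency matrix, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the matrix `A` is a hypothetical attention adjacency matrix where `A[i, j]` holds the attention weight of the edge from node `j` to node `i`, and the node count (5 frames) is arbitrary.

```python
import numpy as np

# Hypothetical attention adjacency matrix for one GAT block over 5 frames.
# A[i, j] = attention weight of the edge from node j to node i.
rng = np.random.default_rng(0)
A = rng.random((5, 5))
A = A / A.sum(axis=1, keepdims=True)  # row-normalize, as attention weights typically are

# Weighted in-degree (WiD) of node j: total attention it receives from all nodes,
# i.e., the column sum of the adjacency matrix.
wids = A.sum(axis=0)

# Rank nodes (frames) by salience, highest WiD first, to explain the decision.
ranking = np.argsort(wids)[::-1]
print("WiDs:", wids)
print("most-to-least salient frames:", ranking)
```

In the full architecture, the same ranking would be applied at the spatial blocks (to select salient objects within a frame) and at the temporal blocks (to select salient frames within the video).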

Keywords