IEEE Access (Jan 2023)

Parallel Multi-Head Graph Attention Network (PMGAT) Model for Human-Object Interaction Detection

  • Jiali Zhang,
  • Zuriahati Mohd Yunos,
  • Habibollah Haron

DOI: https://doi.org/10.1109/ACCESS.2023.3335193
Journal volume & issue: Vol. 11, pp. 131708–131725

Abstract

Human-object interaction (HOI) detection is an advanced task in computer vision and is crucial for deep scene understanding. However, current HOI detection models face two serious challenges: first, they rely heavily on appearance features and neglect the local details of human-object interactions; second, the training cost of existing detection models is high. To overcome these challenges, this study proposes a Parallel Multi-Head Graph Attention Network (PMGAT) model for detecting human-object interaction correlations. First, recognizing the close relationship of facial landmarks and body keypoints with objects, a local feature module is introduced to construct a relational graph over facial keypoints, body keypoints, and objects. A multi-head graph attention network is used to accurately capture the interaction correlations between keypoints, addressing the neglect of local details. Furthermore, a global feature module is designed to extract absolute and relative spatial pose features based on the positions of human keypoints relative to objects, enabling a deeper extraction of human-object interactions. To reduce training cost, the model adopts a multi-branch parallel structure and employs a multi-threaded multi-GPU scheme for parallel training acceleration. The empirical results demonstrate that PMGAT outperforms the current state-of-the-art ViPLO method in mAP on the V-COCO and HICO-DET datasets: on V-COCO it improves on ViPLO by up to 0.8% mAP, while on the more demanding HICO-DET the improvement reaches up to 1.47% mAP. Furthermore, PMGAT requires the least training time among the compared approaches. Overall, these results corroborate PMGAT's dual gains in accuracy and training efficiency.
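To make the core mechanism concrete, the following is a minimal NumPy sketch of a generic multi-head graph attention layer (in the style of Velickovic et al.'s GAT) operating over a toy graph of keypoint and object nodes. The shapes, the toy graph, and all parameter names are illustrative assumptions for exposition; this is not the paper's PMGAT implementation.

```python
import numpy as np

def multi_head_graph_attention(X, A, W_list, a_list):
    """One multi-head graph-attention layer (illustrative sketch).

    X: (N, F) node features; A: (N, N) adjacency with self-loops (1 = edge);
    W_list / a_list: per-head projection (F, F') and attention (2F',) params.
    Returns the concatenation of all heads: (N, K * F').
    """
    heads = []
    for W, a in zip(W_list, a_list):
        H = X @ W                                   # per-head projection: (N, F')
        N = H.shape[0]
        e = np.empty((N, N))
        for i in range(N):
            for j in range(N):
                # attention logit e[i, j] = LeakyReLU(a . [h_i || h_j])
                z = np.concatenate([H[i], H[j]]) @ a
                e[i, j] = z if z > 0 else 0.2 * z
        e = np.where(A > 0, e, -1e9)                # mask out non-edges
        alpha = np.exp(e - e.max(axis=1, keepdims=True))
        alpha /= alpha.sum(axis=1, keepdims=True)   # softmax over neighbours
        heads.append(alpha @ H)                     # attention-weighted aggregation
    return np.concatenate(heads, axis=1)

# Toy example: 3 keypoint nodes + 1 object node, fully connected with self-loops.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5))                     # 4 nodes, 5 features each
A = np.ones((4, 4))                                 # dense toy adjacency
K, Fp = 2, 3                                        # 2 heads, 3 output dims per head
W_list = [rng.standard_normal((5, Fp)) for _ in range(K)]
a_list = [rng.standard_normal(2 * Fp) for _ in range(K)]
out = multi_head_graph_attention(X, A, W_list, a_list)
print(out.shape)  # (4, 6): per-node features, heads concatenated
```

Each head attends independently over a node's neighbours and the heads are concatenated, which is the standard way multi-head attention lets different heads specialise in different keypoint-object relations.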

Keywords