IET Computer Vision (Jun 2023)

Video2mesh: 3D human pose and shape recovery by a temporal convolutional transformer network

  • Xianjin Chao
  • Zhipeng Ge
  • Howard Leung

DOI
https://doi.org/10.1049/cvi2.12172
Journal volume & issue
Vol. 17, no. 4
pp. 379–388

Abstract

From a 2D video of a person in action, human mesh recovery aims to infer the 3D human pose and shape frame by frame. Despite progress on video-based human pose and shape estimation, it is still challenging to guarantee high accuracy and smoothness simultaneously. To tackle this problem, we propose Video2mesh, a temporal convolutional transformer (TConvTransformer) based temporal network that recovers accurate and smooth human meshes from 2D video. The temporal convolution block achieves sequence-level smoothness by aggregating image features from adjacent frames. The subsequent multi-attention transformer improves accuracy because its multiple attention subspaces yield a better middle-frame feature representation. Meanwhile, we add a TConvTransformer discriminator that is trained together with our 3D human mesh temporal encoder. This discriminator further improves accuracy and smoothness by restricting the pose and shape to a more reliable space learnt from the AMASS dataset. We conduct extensive experiments on three standard benchmark datasets and show that our proposed Video2mesh outperforms other state-of-the-art methods in both accuracy and smoothness.
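The abstract only outlines the architecture, so the following PyTorch code is a minimal sketch of the general pattern it describes: a 1D temporal convolution that mixes features from adjacent frames, followed by a multi-head transformer encoder whose output for the middle frame is kept for mesh regression. All layer sizes, the 16-frame window, and every name here (TConvTransformerSketch, feat_dim, hidden_dim) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class TConvTransformerSketch(nn.Module):
    """Hypothetical temporal-convolution + transformer encoder over per-frame features."""

    def __init__(self, feat_dim=2048, hidden_dim=512, num_heads=8, num_layers=2):
        super().__init__()
        # Temporal convolution: aggregates features from adjacent frames
        # (kernel spans 3 frames), encouraging sequence-level smoothness.
        self.temporal_conv = nn.Conv1d(feat_dim, hidden_dim, kernel_size=3, padding=1)
        # Multi-head self-attention: each head attends in its own subspace,
        # refining the representation of every frame in the window.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def forward(self, frame_feats):
        # frame_feats: (batch, seq_len, feat_dim) per-frame image features,
        # e.g. from a CNN backbone run on each video frame.
        x = self.temporal_conv(frame_feats.transpose(1, 2)).transpose(1, 2)
        x = self.transformer(x)        # (batch, seq_len, hidden_dim)
        mid = x[:, x.size(1) // 2]     # middle-frame feature
        return mid                     # would feed a pose/shape regressor head


# Usage: a batch of 4 windows, each 16 frames of 2048-dim backbone features.
feats = torch.randn(4, 16, 2048)
middle = TConvTransformerSketch()(feats)
print(middle.shape)  # torch.Size([4, 512])

The adversarial part is omitted above: per the abstract, a discriminator with the same TConvTransformer structure would score regressed pose and shape sequences against real motion-capture sequences from AMASS, penalising implausible or jittery outputs.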

Keywords