IET Computer Vision (Sep 2024)

Pruning‐guided feature distillation for an efficient transformer‐based pose estimation model

  • Dong‐hwi Kim,
  • Dong‐hun Lee,
  • Aro Kim,
  • Jinwoo Jeong,
  • Jong Taek Lee,
  • Sungjei Kim,
  • Sang‐hyo Park

DOI
https://doi.org/10.1049/cvi2.12277
Journal volume & issue
Vol. 18, no. 6
pp. 745–758

Abstract

The authors propose a compression strategy for a transformer-based 3D human pose estimation model, which achieves high accuracy at the cost of a large model size. The approach uses pruning to guide the determination of the search range, enabling lightweight pose estimation under limited training time and identifying the optimal model size. In addition, the authors propose a transformer-based feature distillation (TFD) method that exploits the characteristics of the transformer architecture to make the pose estimation model efficient in terms of both model size and accuracy. Pruning-guided TFD is the first such compression approach for 3D human pose estimation built on a transformer architecture. The proposed approach was evaluated in extensive experiments on various datasets, and the results show that it reduces the model size by 30% compared with the state of the art while maintaining high accuracy.
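To illustrate the feature-distillation idea described in the abstract, the sketch below shows one common way to align a compact student transformer's block outputs with a larger teacher's during training. This is a minimal, hedged example: the module and variable names (FeatureDistillationLoss, proj, alpha, mpjpe) and the MSE-based alignment are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of transformer feature distillation for pose estimation.
# All names here (FeatureDistillationLoss, proj, alpha, mpjpe) are illustrative
# assumptions and do not reproduce the paper's actual TFD method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureDistillationLoss(nn.Module):
    """Aligns student transformer features with teacher features via MSE."""

    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        # Project the (smaller) student feature dimension up to the teacher's.
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, student_feats, teacher_feats):
        # student_feats, teacher_feats: lists of [batch, tokens, channels]
        # transformer block outputs from the student and teacher models.
        loss = 0.0
        for s, t in zip(student_feats, teacher_feats):
            loss = loss + F.mse_loss(self.proj(s), t.detach())
        return loss / len(student_feats)


# Usage sketch: combine the pose regression loss with the distillation term.
# alpha balances the two objectives; its value is a tuning choice, not from the paper.
# task_loss = mpjpe(student_pose, gt_pose)
# total_loss = task_loss + alpha * distill_loss(student_feats, teacher_feats)
```

In this kind of setup the teacher is the full-size transformer pose model and the student is the pruned, lightweight one; detaching the teacher features keeps gradients flowing only into the student and the projection layer.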

Keywords