Sensors (Sep 2021)

CrossFuNet: RGB and Depth Cross-Fusion Network for Hand Pose Estimation

  • Xiaojing Sun,
  • Bin Wang,
  • Longxiang Huang,
  • Qian Zhang,
  • Sulei Zhu,
  • Yan Ma

DOI
https://doi.org/10.3390/s21186095
Journal volume & issue
Vol. 21, no. 18
p. 6095

Abstract

Despite recent successes in hand pose estimation from RGB images or depth maps, inherent challenges remain. RGB-based methods suffer from heavy self-occlusion and depth ambiguity, while depth sensors are range-dependent and typically restricted to indoor use, which limits the practical application of depth-based methods. These challenges inspired us to combine the two modalities so that each offsets the shortcomings of the other. In this paper, we propose CrossFuNet, a novel RGB and depth information fusion network that improves the accuracy of 3D hand pose estimation. Specifically, the RGB image and its paired depth map are fed into two separate subnetworks. Their feature maps are then combined in a fusion module, for which we propose a new approach to merging information from the two modalities. Finally, following common practice, the 3D keypoints are regressed from heatmaps. We validate our model on two public datasets, and the results show that it outperforms state-of-the-art methods.
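The two-stream pipeline the abstract describes (separate RGB and depth subnetworks, a fusion module, then heatmap-based keypoint regression) can be sketched roughly as follows. The paper's actual fusion operation and network details are not given in the abstract, so the gated element-wise fusion, the dummy feature extractor, and the 21-joint count below are illustrative assumptions, not the authors' CrossFuNet design.

```python
import numpy as np

def extract_features(image, channels=64):
    """Stand-in for a CNN subnetwork: maps an input to a feature map
    at 1/4 resolution. A real model would learn this; here we emit a
    deterministic dummy tensor of the right shape (assumption)."""
    h, w = image.shape[:2]
    rng = np.random.default_rng(0)
    return rng.standard_normal((channels, h // 4, w // 4))

def cross_fuse(rgb_feat, depth_feat):
    """Illustrative cross-fusion: each stream is modulated by a
    sigmoid gate computed from the other stream, and the results are
    summed. This is an assumed scheme, not the paper's fusion module."""
    gate = lambda x: 1.0 / (1.0 + np.exp(-x))
    return rgb_feat * gate(depth_feat) + depth_feat * gate(rgb_feat)

def heatmaps_to_keypoints(heatmaps):
    """Read off one 2D keypoint per joint as the argmax of its heatmap."""
    pts = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        pts.append((x, y))
    return pts

# Toy run: a paired RGB image and depth map at the same resolution.
rgb = np.zeros((64, 64, 3), dtype=np.float32)
depth = np.zeros((64, 64), dtype=np.float32)
fused = cross_fuse(extract_features(rgb), extract_features(depth))
keypoints = heatmaps_to_keypoints(fused[:21])  # 21 hand joints (assumed)
print(len(keypoints))  # -> 21
```

In a real implementation the heatmap argmax would be lifted to 3D (e.g. via the depth map or a learned depth branch); the sketch stops at the 2D read-out to keep the structure visible.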

Keywords