IEEE Access (Jan 2021)
3D Hand Pose Estimation via Graph-Based Reasoning
Abstract
Hand pose estimation from a single depth image has recently received significant attention owing to its importance in many applications requiring human-computer interaction. The rapid progress of convolutional neural networks (CNNs) and technological advances in low-cost depth cameras have greatly improved the performance of hand pose estimation methods. Nevertheless, regressing joint coordinates remains a challenging task due to joint flexibility and self-occlusion. Previous hand pose estimation methods rely on deep and complex network structures without fully exploiting hand joint connections. A hand is an articulated object consisting of six parts: the palm and five fingers. Kinematic constraints can be obtained by modeling the dependency between adjacent joints. This paper proposes a novel CNN-based approach that incorporates hand joint connections into features through both global relation inference over the entire hand and local relation inference for each finger. Modeling the relations between hand joints can alleviate critical problems caused by occlusion and self-similarity. We also present a hierarchical structure with six branches that independently estimate the positions of the palm and five fingers, adding the connections of each joint via graph reasoning based on graph convolutional networks. Experimental results show that the proposed method achieves the best accuracy compared to previous state-of-the-art methods on public hand pose datasets. In addition, the proposed method can be utilized for real-time applications, with an execution speed of 103 fps in a single-GPU environment.
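To make the graph-reasoning idea concrete, the following is a minimal sketch of one graph-convolution step over hand-joint features, in the standard Kipf-and-Welling GCN style (symmetrically normalized adjacency, linear projection, ReLU). The joint count, bone list, and feature dimensions are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Assumed setup: 21 hand joints (a common skeleton), 64-d features per joint.
NUM_JOINTS = 21
FEAT_DIM = 64

# Adjacency with self-loops; 1 where two joints are kinematically connected.
A = np.eye(NUM_JOINTS)
bones = [(0, 1), (1, 2), (2, 3), (3, 4)]  # e.g. one finger chain; full skeleton omitted
for i, j in bones:
    A[i, j] = A[j, i] = 1

# Symmetric normalization: D^{-1/2} A D^{-1/2}
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

def gcn_layer(X, W):
    """One graph-convolution step: aggregate connected-joint features, project, ReLU."""
    return np.maximum(A_hat @ X @ W, 0.0)

X = np.random.randn(NUM_JOINTS, FEAT_DIM)      # per-joint features, e.g. from a CNN
W = np.random.randn(FEAT_DIM, FEAT_DIM) * 0.1  # learnable projection weights
X_out = gcn_layer(X, W)
print(X_out.shape)  # (21, 64)
```

In a hierarchical design like the one described, a layer of this form could be applied globally over the whole skeleton and locally within each finger's sub-graph, so that each joint's features are refined by its kinematic neighbors.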
Keywords