IET Computer Vision (Dec 2015)

Trajectory‐based view‐invariant hand gesture recognition by fusing shape and orientation

  • Xingyu Wu,
  • Xia Mao,
  • Lijiang Chen,
  • Yuli Xue

DOI
https://doi.org/10.1049/iet-cvi.2014.0368
Journal volume & issue
Vol. 9, no. 6
pp. 797–805

Abstract

Traditional studies in vision‐based hand gesture recognition remain rooted in view‐dependent representations, and hence users are forced to be fronto‐parallel to the camera. To solve this problem, view‐invariant gesture recognition aims to make the recognition result independent of viewpoint changes. However, in current work this view‐invariance is achieved at the price of confusing gesture patterns that have similar trajectory shapes but different semantic meanings; for example, the gesture ‘push’ can be mistaken for ‘drag’ from another viewpoint. To address this shortcoming, in this study, the authors use a shape descriptor to extract the view‐invariant features of a three‐dimensional (3D) trajectory. Because the shape features alone are invariant to omnidirectional viewpoint changes, orientation features are then introduced to weight different rotation angles so that similar trajectory shapes are better separated. The proposed method was evaluated on two databases: the popular Australian Sign Language database and the challenging Kinect Hand Trajectory database. Experimental results show that the proposed algorithm achieves a higher average recognition rate than state‐of‐the‐art approaches and better distinguishes easily confused gestures while meeting the view‐invariance condition.
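
The fusion idea can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it uses curvature and torsion (classical rotation‐invariant descriptors of a 3D curve) as stand‐ins for the shape features, raw direction vectors as the orientation features, and a hypothetical weight alpha to control the fusion, with a nearest‐neighbour comparison standing in for the paper's classifier.

# Minimal sketch of shape/orientation fusion for 3D gesture trajectories.
# Not the authors' method: curvature/torsion, the direction features, and
# the fusion weight `alpha` are illustrative assumptions.
import numpy as np

def shape_features(traj):
    """Rotation-invariant curvature/torsion profile of an (N, 3) trajectory."""
    d1 = np.gradient(traj, axis=0)   # velocity
    d2 = np.gradient(d1, axis=0)     # acceleration
    d3 = np.gradient(d2, axis=0)     # jerk
    cross = np.cross(d1, d2)
    speed = np.linalg.norm(d1, axis=1) + 1e-9
    kappa = np.linalg.norm(cross, axis=1) / speed**3                 # curvature
    tau = np.einsum('ij,ij->i', cross, d3) / (
        np.linalg.norm(cross, axis=1)**2 + 1e-9)                     # torsion
    return np.concatenate([kappa, tau])

def orientation_features(traj):
    """View-dependent direction vectors, separating e.g. 'push' from 'drag'."""
    d1 = np.gradient(traj, axis=0)
    return (d1 / (np.linalg.norm(d1, axis=1, keepdims=True) + 1e-9)).ravel()

def fused_features(traj, alpha=0.5):
    # alpha trades off full view-invariance against orientation sensitivity.
    return np.concatenate([shape_features(traj),
                           alpha * orientation_features(traj)])

# Toy usage: classify a noisy query gesture by nearest neighbour on templates.
rng = np.random.default_rng(0)
templates = {name: rng.standard_normal((64, 3)).cumsum(axis=0)
             for name in ('push', 'drag')}
query = templates['push'] + 0.05 * rng.standard_normal((64, 3))
best = min(templates, key=lambda n: np.linalg.norm(
    fused_features(templates[n]) - fused_features(query)))
print(best)  # expected: 'push'

In practice the trajectories would first be resampled to a common length so that the fused feature vectors are directly comparable across gestures.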

Keywords