Applied Sciences (May 2019)

Deep Forest-Based Monocular Visual Sign Language Recognition

  • Qifan Xue,
  • Xuanpeng Li,
  • Dong Wang,
  • Weigong Zhang

DOI: https://doi.org/10.3390/app9091945
Journal volume & issue: Vol. 9, no. 9, p. 1945

Abstract

Sign language recognition (SLR) is a bridge linking the hearing impaired and the general public. SLR methods that rely on wearable data gloves are not portable enough to provide a daily sign language translation service, whereas visual SLR is more flexible and works in most scenarios. This paper introduces a monocular vision-based approach to SLR. Human skeleton-based action recognition is proposed to express the semantic information of signs, combining the regularization of body-joint features with a deep-forest-based semantic classifier that uses a voting strategy. We test our approach on the public American Sign Language Lexicon Video Dataset (ASLLVD) and a private testing set. The approach achieves promising performance and shows high generalization capability on the testing set.
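The deep-forest classifier and voting strategy are detailed in the paper itself; as a rough illustration of the general idea, the following sketch averages class-probability votes across an ensemble of forests (one "layer" in the spirit of a gcForest-style cascade). The synthetic feature vectors stand in for regularized body-joint features, and scikit-learn is an assumed dependency; this is not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

# Synthetic stand-in for regularized joint-feature vectors
# (e.g. normalized coordinates of upper-body joints per frame window).
X, y = make_classification(n_samples=600, n_features=34, n_classes=5,
                           n_informative=20, random_state=0)
X_train, y_train = X[:500], y[:500]
X_test, y_test = X[500:], y[500:]

# One layer of a cascade: several forests trained in parallel.
forests = [
    RandomForestClassifier(n_estimators=100, random_state=1),
    RandomForestClassifier(n_estimators=100, random_state=2),
    ExtraTreesClassifier(n_estimators=100, random_state=3),
    ExtraTreesClassifier(n_estimators=100, random_state=4),
]
for f in forests:
    f.fit(X_train, y_train)

# Voting: average the class-probability vectors of all forests,
# then take the arg-max class as the ensemble prediction.
avg_proba = np.mean([f.predict_proba(X_test) for f in forests], axis=0)
pred = np.argmax(avg_proba, axis=1)
accuracy = np.mean(pred == y_test)
```

A full deep forest would stack further layers, concatenating each layer's probability vectors onto the input features before training the next layer.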

Keywords