IET Computer Vision (Oct 2018)

Multi‐modality‐based Arabic sign language recognition

  • Marwa Elpeltagy,
  • Moataz Abdelwahab,
  • Mohamed E. Hussein,
  • Amin Shoukry,
  • Asmaa Shoala,
  • Moustafa Galal

DOI
https://doi.org/10.1049/iet-cvi.2017.0598
Journal volume & issue
Vol. 12, no. 7, pp. 1031–1039

Abstract

With the increase in the number of deaf‐mute people in the Arab world and the lack of Arabic sign language (ArSL) recognition benchmark data sets, there is a pressing need for publishing a large‐volume and realistic ArSL data set. This study presents such a data set, which consists of 150 isolated ArSL signs. The data set is challenging due to the great similarity among hand shapes and motions in the collected signs. Along with the data set, a sign language recognition algorithm is presented. The authors’ proposed method consists of three major stages: hand segmentation, hand shape sequence and body motion description, and sign classification. The hand shape segmentation is based on the depth and position of the hand joints. Histograms of oriented gradients and principal component analysis are applied on the segmented hand shapes to obtain the hand shape sequence descriptor. The covariance of the three‐dimensional joints of the upper half of the skeleton in addition to the hand states and face properties are adopted for motion sequence description. The canonical correlation analysis and random forest classifiers are used for classification. The achieved accuracy is 55.57% over 150 ArSL signs, which is considered promising.
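The motion-description stage above builds a fixed-length descriptor from the covariance of the 3-D upper-body joint trajectories. A minimal sketch of that idea in NumPy is shown below; the function name, the choice of 8 joints, and the toy random sequence are illustrative assumptions, not the authors' actual implementation (which additionally folds in hand states and face properties):

```python
import numpy as np

def joint_covariance_descriptor(joints):
    """Covariance-based motion descriptor (illustrative sketch).

    joints: array of shape (T, J, 3) -- T frames, J upper-body joints,
    3-D coordinates per joint. Returns the upper triangle of the
    (3J x 3J) covariance matrix as a vector whose length is fixed
    regardless of the sequence length T.
    """
    T, J, _ = joints.shape
    X = joints.reshape(T, J * 3)      # one row per frame
    C = np.cov(X, rowvar=False)       # (3J, 3J) covariance over time
    iu = np.triu_indices(J * 3)       # covariance is symmetric, so keep
    return C[iu]                      # only the upper triangle

# toy example: 40 frames of 8 hypothetical upper-body joints
rng = np.random.default_rng(0)
seq = rng.random((40, 8, 3))
desc = joint_covariance_descriptor(seq)
print(desc.shape)  # (300,) -- 24 * 25 / 2 upper-triangle entries
```

Because the descriptor length depends only on the number of joints, sequences of different durations map to vectors of the same size, which is what lets a fixed-input classifier (here, CCA or a random forest) compare signs directly.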

Keywords