Sensors (Jun 2024)

Real-Time Arabic Sign Language Recognition Using a Hybrid Deep Learning Model

  • Talal H. Noor,
  • Ayman Noor,
  • Ahmed F. Alharbi,
  • Ahmed Faisal,
  • Rakan Alrashidi,
  • Ahmed S. Alsaedi,
  • Ghada Alharbi,
  • Tawfeeq Alsanoosy,
  • Abdullah Alsaeedi

DOI
https://doi.org/10.3390/s24113683
Journal volume & issue
Vol. 24, no. 11
p. 3683

Abstract

Sign language is an essential means of communication for individuals with hearing disabilities. However, there is a significant shortage of sign language interpreters for some languages, especially in Saudi Arabia. As a result, a large proportion of the hearing-impaired population is deprived of services, particularly in public places. This paper aims to address this accessibility gap by leveraging technology to develop systems capable of recognizing Arabic Sign Language (ArSL) using deep learning techniques. We propose a hybrid model that captures the spatio-temporal aspects of sign language (i.e., letters and words). The hybrid model consists of a Convolutional Neural Network (CNN) classifier that extracts spatial features from sign language data and a Long Short-Term Memory (LSTM) classifier that extracts spatio-temporal characteristics from sequential data (i.e., hand movements). To demonstrate the feasibility of the proposed hybrid model, we created an ArSL dataset of 20 different words: 4000 images covering 10 static gesture words and 500 videos covering 10 dynamic gesture words. The proposed hybrid model demonstrates promising performance, with the CNN and LSTM classifiers achieving accuracy rates of 94.40% and 82.70%, respectively. These results indicate that our approach can significantly enhance communication accessibility for the hearing-impaired community in Saudi Arabia. Thus, this paper represents a major step toward promoting inclusivity and improving the quality of life for the hearing impaired.
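
To make the two-branch design concrete, the sketch below pairs a small CNN image classifier (for the 10 static gesture words) with a per-frame CNN feeding an LSTM (for the 10 dynamic gesture words). This is a minimal illustration in PyTorch, not the authors' implementation: the layer sizes, the 64x64 input resolution, and the 30-frame clip length are all assumptions, since the abstract does not specify them.

```python
# Minimal sketch of a CNN + LSTM hybrid for sign language recognition.
# All architectural details (channel counts, 64x64 inputs, 30-frame clips)
# are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class StaticGestureCNN(nn.Module):
    """CNN classifier for static gesture images (spatial features)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),  # assumes 64x64 inputs
            nn.Linear(128, num_classes),
        )

    def forward(self, x):  # x: (batch, 3, 64, 64)
        return self.classifier(self.features(x))

class DynamicGestureLSTM(nn.Module):
    """Per-frame CNN features followed by an LSTM over the frame
    sequence, capturing the temporal dynamics of hand movements."""
    def __init__(self, num_classes: int = 10, hidden: int = 128):
        super().__init__()
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 32-dim vector per frame
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):  # clips: (batch, frames, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.frame_cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])  # classify from the final hidden state

# Usage: 10 static-word classes and 10 dynamic-word classes, as in the paper.
static_logits = StaticGestureCNN()(torch.randn(4, 3, 64, 64))
dynamic_logits = DynamicGestureLSTM()(torch.randn(4, 30, 3, 64, 64))
```

In this layout the CNN branch handles single images, while the LSTM branch reuses a lightweight CNN on every frame before the recurrent layer, which matches the abstract's division of labor between spatial features and sequential hand movements.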

Keywords