Applied Sciences (Apr 2024)

A Performance Comparison of Japanese Sign Language Recognition with ViT and CNN Using Angular Features

  • Tamon Kondo,
  • Sakura Narumi,
  • Zixun He,
  • Duk Shin,
  • Yousun Kang

DOI
https://doi.org/10.3390/app14083228
Journal volume & issue
Vol. 14, no. 8
p. 3228

Abstract


In recent years, developments in deep learning technology have driven significant advances in research aimed at facilitating communication with individuals who have hearing impairments, with a particular focus on enhancing automatic sign language recognition and translation systems. This study proposes a novel approach using a vision transformer (ViT) for recognizing Japanese Sign Language. Our method employs the pose estimation library MediaPipe to extract the positional coordinates of each finger joint within video frames and generates one-dimensional angular feature data from these coordinates. These feature data are then arranged in a temporal sequence to form a two-dimensional input for the ViT model. To determine the optimal configuration, this study evaluated recognition accuracy while varying the number of encoder layers in the ViT model and compared the results against traditional convolutional neural network (CNN) models to assess the approach's effectiveness. The experimental results showed 99.7% accuracy for the ViT-based method and 99.3% for the CNN-based method. We further demonstrated the efficacy of our approach through real-time recognition experiments on Japanese Sign Language videos.
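The feature construction described above (MediaPipe hand landmarks per frame, one-dimensional angular features, then a two-dimensional temporal stack fed to the classifier) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the joint-index triplets and the helper names `joint_angles` and `build_input` are hypothetical choices based on MediaPipe Hands' 21-point landmark layout.

```python
import numpy as np

# Hypothetical (parent, joint, child) landmark triplets along each finger,
# following MediaPipe Hands' 21-point layout; the exact angle set used in
# the paper is not specified here, so this selection is an assumption.
FINGER_TRIPLETS = [
    (0, 1, 2), (1, 2, 3), (2, 3, 4),          # thumb
    (0, 5, 6), (5, 6, 7), (6, 7, 8),          # index
    (0, 9, 10), (9, 10, 11), (10, 11, 12),    # middle
    (0, 13, 14), (13, 14, 15), (14, 15, 16),  # ring
    (0, 17, 18), (17, 18, 19), (18, 19, 20),  # little
]

def joint_angles(landmarks: np.ndarray) -> np.ndarray:
    """Convert one frame of (21, 3) hand landmark coordinates into a
    one-dimensional vector of joint angles (radians)."""
    feats = []
    for a, b, c in FINGER_TRIPLETS:
        v1 = landmarks[a] - landmarks[b]
        v2 = landmarks[c] - landmarks[b]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
        feats.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.asarray(feats, dtype=np.float32)

def build_input(frame_landmarks: list) -> np.ndarray:
    """Stack per-frame angle vectors along time to form the
    two-dimensional (frames x angles) input for the model."""
    return np.stack([joint_angles(lm) for lm in frame_landmarks], axis=0)
```

In this sketch each video frame contributes one row of angular features, so the resulting matrix has one axis for time and one for the joint angles, matching the two-dimensional input arrangement described in the abstract.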

Keywords