IEEE Access (Jan 2024)

A Methodological and Structural Review of Hand Gesture Recognition Across Diverse Data Modalities

  • Jungpil Shin,
  • Abu Saleh Musa Miah,
  • Md. Humaun Kabir,
  • Md. Abdur Rahim,
  • Abdullah Al Shiam

DOI
https://doi.org/10.1109/ACCESS.2024.3456436
Journal volume & issue
Vol. 12, pp. 142606–142639

Abstract


Researchers have been developing Hand Gesture Recognition (HGR) systems to enable natural, efficient, and authentic human-computer interaction, especially benefiting those who rely solely on hand gestures for communication. Despite significant progress, automatic and precise identification of hand gestures remains a considerable challenge in computer vision. Recent studies have focused on specific modalities such as RGB images, skeleton data, and spatiotemporal interest points. This paper comprehensively reviews HGR techniques and data modalities from 2014 to 2024, exploring advancements in sensor technology and computer vision. We highlight accomplishments across modalities, including RGB, skeleton, depth, audio, electromyography (EMG), electroencephalography (EEG), and multimodal approaches, and identify areas needing further research. We reviewed over 250 articles from prominent databases, focusing on data collection, data settings, and gesture representation. Our review assesses the efficacy of HGR systems through their recognition accuracy and identifies a gap in research on continuous gesture recognition, indicating the need for improved vision-based gesture systems. The field has seen steady research progress, including advances in hand-crafted features and deep learning (DL) techniques. Additionally, we report on promising developments in HGR methods and in multimodal approaches. We hope this survey will serve as a guideline for research on HGR across diverse data modalities.

Keywords