IEEE Access (Jan 2024)

DDC3N: Doppler-Driven Convolutional 3D Network for Human Action Recognition

  • Mukhiddin Toshpulatov,
  • Wookey Lee,
  • Suan Lee,
  • Hoyoung Yoon,
  • U Kang

DOI: https://doi.org/10.1109/ACCESS.2024.3422428
Journal volume & issue: Vol. 12, pp. 93546 – 93567

Abstract


Deep learning (DL)-based human action recognition (HAR) has made considerable strides. Nevertheless, accurate classification of sports athletes' actions remains an open problem, primarily because of the need for exhaustive datasets of athletes' actions and the persistent challenges posed by variable camera perspectives, changing lighting conditions, and occlusions. This work thoroughly examines existing HAR datasets, providing a benchmark for gauging the efficacy of state-of-the-art methods. Given the scarcity of datasets covering athlete actions, we curate two datasets tailored specifically to sports athletes and analyze their impact on performance. While the superiority of 3D convolutional neural networks (3DCNN) over graph convolutional networks (GCN) in HAR is evident, they incur a considerable computational overhead, particularly on large datasets. Our study introduces novel methods and a more resource-efficient solution for HAR that alleviates the computational burden of the 3DCNN architecture. It thus offers a multifaceted approach to improving HAR with surveillance cameras, bridging dataset gaps, overcoming computational bottlenecks, and achieving significant gains in the accuracy and efficiency of HAR frameworks. GitHub link: https://github.com/muxiddin19/DDC3N-Doppler-Driven-C3D-Network-for-HAR
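The abstract's point about the computational overhead of 3DCNNs can be illustrated with a rough multiply-accumulate (MAC) count. The layer shapes below are illustrative assumptions for a sketch, not the architecture from the paper:

```python
from math import prod

def conv_macs(out_spatial, out_channels, in_channels, kernel):
    """Multiply-accumulates for one conv layer: each output element
    costs in_channels * prod(kernel) MACs."""
    return prod(out_spatial) * out_channels * in_channels * prod(kernel)

# Assumed example shapes: a 16-frame 112x112 RGB clip through one
# 3x3x3 3D conv with 64 output channels, versus a 3x3 2D conv
# applied independently to each of the 16 frames.
macs_3d = conv_macs((16, 112, 112), 64, 3, (3, 3, 3))
macs_2d = 16 * conv_macs((112, 112), 64, 3, (3, 3))
print(macs_3d / macs_2d)  # → 3.0
```

Even with equal spatial resolution and channel counts, the extra temporal kernel dimension multiplies the per-layer cost, which compounds across a deep network and large video datasets; this is the overhead the proposed resource-efficient approach aims to reduce.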

Keywords