IEEE Access (Jan 2024)

DGU-HAO: A Dataset With Daily Life Objects for Comprehensive 3D Human Action Analysis

  • Jiho Park,
  • Junghye Kim,
  • Yujung Gil,
  • Dongho Kim

DOI: https://doi.org/10.1109/ACCESS.2024.3351888
Journal volume & issue: Vol. 12, pp. 8780–8790

Abstract

The availability of high-quality datasets is essential for 3D human action analysis research. This paper introduces DGU-HAO (Human Action analysis dataset with daily life Objects), a novel multi-modality 3D human action dataset that encompasses four data modalities with accompanying annotations: motion capture, RGB video, image, and 3D object modeling data. It features 63 action classes involving interactions with 60 common items of furniture and electronic devices. Each action class comprises approximately 1,000 motion capture samples representing 3D skeleton data, along with corresponding RGB video and 3D object modeling data, for a total of 67,505 motion capture samples. The dataset provides comprehensive 3D structural information about the human body, RGB images and videos, and point cloud data for the 60 objects, collected from 126 subjects to ensure inclusivity and account for diverse body types. To validate the dataset, we leveraged MMNet, a 3D human action recognition model, achieving Top-1 accuracies of 91.51% and 92.29% using the skeleton joint and bone methods, respectively. Beyond human action recognition, this versatile dataset is valuable for a wide range of 3D human action analysis research.
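For readers unfamiliar with the "joint" versus "bone" input streams mentioned above: skeleton-based recognition models such as MMNet typically take either raw 3D joint coordinates or bone vectors, where each bone is the difference between a joint and its parent joint. The short Python sketch below illustrates that conversion only in principle; the PARENTS topology and the joints_to_bones helper are hypothetical placeholders, not the actual DGU-HAO skeleton definition or MMNet preprocessing code.

import numpy as np

# Hypothetical parent indices for a toy 5-joint chain; the real DGU-HAO/MMNet
# skeleton topology would need to be substituted here.
PARENTS = [0, 0, 1, 2, 3]  # joint 0 is the root (its "bone" is zero-length)

def joints_to_bones(joints: np.ndarray, parents=PARENTS) -> np.ndarray:
    """Convert per-frame 3D joint coordinates to bone vectors.

    joints: array of shape (T, J, 3) -- T frames, J joints, xyz coordinates.
    Returns an array of the same shape in which each entry is the vector
    from a joint's parent to the joint itself (the "bone" feature).
    """
    bones = np.zeros_like(joints)
    for j, p in enumerate(parents):
        bones[:, j] = joints[:, j] - joints[:, p]
    return bones

if __name__ == "__main__":
    T, J = 4, len(PARENTS)
    demo = np.random.rand(T, J, 3).astype(np.float32)  # stand-in for motion capture data
    print(joints_to_bones(demo).shape)  # (4, 5, 3)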

Keywords