Data in Brief (Apr 2024)
Multimodal human motion dataset of 3D anatomical landmarks and pose keypoints
Abstract
In this paper, we present a dataset that relates 2D and 3D human pose keypoints estimated from images to the 3D locations of anatomical landmarks. The dataset contains 51,051 poses obtained from 71 persons captured in A-Pose and while performing 7 movements (walking, running, squatting, and four types of jumping). The participants were scanned to build a collection of 3D moving textured meshes with anatomical correspondence. Each mesh in that collection was used to obtain the 3D locations of 53 anatomical landmarks, and 48 images were rendered using virtual cameras with different perspectives. 2D pose keypoints were extracted from those images using the MediaPipe Human Pose Landmarker, and their corresponding 3D keypoints were calculated by linear triangulation.

The dataset consists of a folder for each participant containing, for each movement sequence, two Track Row Column (TRC) files and one JSON file. One TRC file stores the triangulated 3D keypoints, while the other contains the 3D anatomical landmarks. The JSON file stores the 2D keypoints and the calibration parameters of the virtual cameras. The anthropometric characteristics of the participants are annotated in a single CSV file.

These data are intended to be used in developments that transform existing computer-vision human pose solutions into biomechanical applications or simulations. The dataset can also be used in other applications, such as training neural networks for human motion analysis and studying the influence of anthropometric characteristics on their performance.
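To illustrate the 2D keypoint extraction step, the following is a minimal sketch using the MediaPipe Pose Landmarker Python API. The model bundle name pose_landmarker.task and the image name camera_00.png are placeholders, and the exact detector settings used to build the dataset are not specified in the abstract.

```python
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# "pose_landmarker.task" is a placeholder for a downloaded model bundle.
options = vision.PoseLandmarkerOptions(
    base_options=python.BaseOptions(model_asset_path="pose_landmarker.task"))
detector = vision.PoseLandmarker.create_from_options(options)

image = mp.Image.create_from_file("camera_00.png")  # placeholder image name
result = detector.detect(image)

# Landmarks are normalized to [0, 1]; scale to pixels before triangulation.
# Indexing result.pose_landmarks[0] assumes one detected person per image.
keypoints_2d = [(lm.x * image.width, lm.y * image.height)
                for lm in result.pose_landmarks[0]]
```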
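The abstract does not include the triangulation code itself; the sketch below shows standard linear (DLT) triangulation with NumPy, which recovers a 3D point from its 2D projections given the 3x4 projection matrices of the virtual cameras. It is a generic implementation of the technique named in the abstract, not the authors' code.

```python
import numpy as np

def triangulate_point(projections, points_2d):
    """Recover one 3D point from >= 2 views by linear (DLT) triangulation.

    projections: sequence of 3x4 camera projection matrices.
    points_2d:   sequence of (u, v) pixel coordinates, one per camera.
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        P = np.asarray(P)
        # Each view gives two linear constraints on the homogeneous point X:
        # u * (P[2] @ X) - P[0] @ X = 0 and v * (P[2] @ X) - P[1] @ X = 0.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    # The least-squares solution is the right singular vector associated
    # with the smallest singular value of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize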
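Finally, a minimal sketch for reading the per-participant files, assuming the conventional 5-line tab-separated TRC header. The folder and file names (P001, walking_landmarks.trc, walking.json) are hypothetical; the actual naming scheme is described in the full article.

```python
import json
from pathlib import Path

def load_trc(path):
    """Minimal TRC reader returning marker names and per-frame coordinates.

    Assumes the conventional TRC layout: 5 tab-separated header lines
    (file info, field names, field values, marker names, axis labels),
    followed by data rows of the form: frame, time, X1, Y1, Z1, X2, ...
    """
    lines = Path(path).read_text().splitlines()
    # Marker names sit on the 4th line, after the Frame# and Time columns.
    markers = [m for m in lines[3].split("\t")[2:] if m]
    frames = []
    for line in lines[5:]:
        if not line.strip():
            continue  # some writers leave a blank line after the header
        values = [float(v) for v in line.split("\t")[2:] if v]
        frames.append([values[i:i + 3] for i in range(0, 3 * len(markers), 3)])
    return markers, frames

# Hypothetical folder and file names; see the article for the real scheme.
participant = Path("P001")
markers, landmarks = load_trc(participant / "walking_landmarks.trc")
meta = json.loads((participant / "walking.json").read_text())
# meta holds the 2D keypoints and the virtual-camera calibration parameters.
```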