IEEE Access (Jan 2024)

QUB-PHEO: A Visual-Based Dyadic Multi-View Dataset for Intention Inference in Collaborative Assembly

  • Samuel Adebayo
  • Sean McLoone
  • Joost C. Dessing

DOI
https://doi.org/10.1109/ACCESS.2024.3485162
Journal volume & issue
Vol. 12
pp. 157050–157066

Abstract

QUB-PHEO introduces a visual-based, dyadic dataset with the potential to advance human-robot interaction (HRI) research in assembly operations and intention inference. The dataset captures rich multimodal interactions between two participants, one acting as a ‘robot surrogate’, across a variety of assembly tasks that are further broken down into 36 distinct subtasks. With detailed visual annotations (facial landmarks, gaze, hand movements, object localization, and more) for 70 participants, QUB-PHEO is offered in two versions: full video data for 50 participants and visual cues for all 70. Designed to improve machine learning models for HRI, QUB-PHEO enables deeper analysis of subtle interaction cues and intentions, promising contributions to the field. The dataset is available at https://github.com/exponentialR/QUB-PHEO subject to an End-User License Agreement (EULA).
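The annotation streams named in the abstract imply a per-participant, per-cue organization. As a minimal sketch of how such a dataset might be traversed in Python, the snippet below assumes a hypothetical layout in which each participant folder (e.g. P01) holds one JSON file per cue; the folder pattern, file names, and schema are illustrative assumptions, not the repository's documented structure.

    import json
    from pathlib import Path

    # Cue streams named in the abstract; "and more" implies further streams
    # exist that are not listed here.
    CUE_TYPES = ["facial_landmarks", "gaze", "hand_movements", "object_localization"]

    def load_participant_cues(root: Path, participant_id: str) -> dict:
        """Collect whichever per-cue annotation files exist for one participant
        (hypothetical one-JSON-file-per-cue layout)."""
        cues = {}
        for cue in CUE_TYPES:
            cue_file = root / participant_id / f"{cue}.json"  # assumed naming scheme
            if cue_file.exists():
                with cue_file.open() as f:
                    cues[cue] = json.load(f)
        return cues

    if __name__ == "__main__":
        root = Path("QUB-PHEO")  # assumed local copy of the dataset
        for participant_dir in sorted(root.glob("P*")):  # assumed folders P01..P70
            cues = load_participant_cues(root, participant_dir.name)
            print(participant_dir.name, "->", sorted(cues))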

Keywords