IEEE Access (Jan 2018)

Towards Robust Human-Robot Collaborative Manufacturing: Multimodal Fusion

  • Hongyi Liu,
  • Tongtong Fang,
  • Tianyu Zhou,
  • Lihui Wang

DOI
https://doi.org/10.1109/ACCESS.2018.2884793
Journal volume & issue
Vol. 6
pp. 74762–74771

Abstract


Intuitive and robust multimodal robot control is key to human–robot collaboration (HRC) in manufacturing systems. Multimodal robot control methods have been introduced in previous studies; they allow human operators to control robots intuitively without writing brand-specific code. However, most multimodal robot control methods are unreliable because feature representations are not shared across modalities. To address this problem, this paper proposes a deep learning-based multimodal fusion architecture for robust multimodal HRC manufacturing systems. The architecture covers three modalities: speech command, hand motion, and body motion. Three unimodal models are first trained to extract features, which are then fused for representation sharing. Experiments show that the multimodal fusion model outperforms each of the three unimodal models. These results indicate great potential for applying the proposed architecture to robust HRC manufacturing systems.
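The fusion idea the abstract describes (per-modality feature extraction followed by a shared joint representation) can be sketched as a late-fusion pipeline. The sketch below is illustrative only: the feature extractors, dimensions, and the five-command output layer are all assumptions, not the authors' actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, w):
    # Stand-in for a trained unimodal model: one linear map + ReLU.
    return np.maximum(x @ w, 0.0)

# Hypothetical raw inputs for the three modalities.
speech = rng.normal(size=16)   # speech-command input
hand = rng.normal(size=24)     # hand-motion input
body = rng.normal(size=32)     # body-motion input

# Each unimodal "model" maps its input to an 8-D feature vector
# (weights here are random placeholders for trained parameters).
w_speech = rng.normal(size=(16, 8))
w_hand = rng.normal(size=(24, 8))
w_body = rng.normal(size=(32, 8))

f_speech = extract_features(speech, w_speech)
f_hand = extract_features(hand, w_hand)
f_body = extract_features(body, w_body)

# Fusion: concatenate the unimodal features and classify jointly,
# so the final decision uses a representation shared across modalities.
fused = np.concatenate([f_speech, f_hand, f_body])  # shape (24,)
w_out = rng.normal(size=(24, 5))                    # 5 hypothetical robot commands
logits = fused @ w_out
command = int(np.argmax(logits))
```

In practice the fusion layer would itself be trained end-to-end on top of the frozen or fine-tuned unimodal extractors, which is what allows the modalities to compensate for one another when one input is noisy.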
