Journal of Advanced Mechanical Design, Systems, and Manufacturing (Aug 2024)

Gaze cue: which body parts will human take as cue to infer a robot’s intention?

  • Liheng YANG,
  • Yoshihiro SEJIMA,
  • Tomio WATANABE

DOI
https://doi.org/10.1299/jamdsm.2024jamdsm0060
Journal volume & issue
Vol. 18, no. 5
Article no. JAMDSM0060

Abstract

In human-human communication, humans observe each other's behavioral actions and infer each other's internal states, such as intentions and emotions, to build relationships. To establish relationships between humans and robots, robots need to recognize human actions and infer human intentions, as well as indicate their own internal states so that humans can perceive them. Studies on indicating robots' internal states are based on the idea of giving robots understandable human-like characteristics that allow humans to infer those internal states and encourage anthropomorphizing in interaction. However, a basic exploration from the cognitive perspective is still lacking: how do humans interpret robots' body language, and from what kinds of bodily movements do humans perceive and infer a robot's internal state? In this paper, we designed and developed a CG character to investigate what cues humans use in intention inference. The CG character had an independently movable head, eyes, and right arm. The intention inference task was as simple as predicting which cup the character would grasp. We further analyzed the participants' gaze points in the experiment, obtained with an eye-tracking device. The results suggested that the CG character's eyes (gaze) have a strong impact on intention inference, and that humans tend to gaze at the CG character's eyes and take the character's gaze as the cue.

Keywords