IEEE Transactions on Neural Systems and Rehabilitation Engineering (Jan 2023)

Gaze-Based Shared Autonomy Framework With Real-Time Action Primitive Recognition for Robot Manipulators

  • Xiaoyu Wang
  • Veronica J. Santos

DOI
https://doi.org/10.1109/TNSRE.2023.3328888
Journal volume & issue
Vol. 31
pp. 4306–4317

Abstract


Robots capable of robust, real-time recognition of human intent during manipulation tasks could be used to enhance human-robot collaboration for activities of daily living. Eye gaze-based control interfaces offer a non-invasive way to infer intent and reduce the cognitive burden on operators of complex robots. Eye gaze is traditionally used for “gaze triggering” (GT) in which staring at an object, or sequence of objects, triggers pre-programmed robotic movements. We propose an alternative approach: a neural network-based “action prediction” (AP) mode that extracts gaze-related features to recognize, and often predict, an operator’s intended action primitives. We integrated the AP mode into a shared autonomy framework capable of 3D gaze reconstruction, real-time intent inference, object localization, obstacle avoidance, and dynamic trajectory planning. Using this framework, we conducted a user study to directly compare the performance of the GT and AP modes using traditional subjective performance metrics, such as Likert scales, as well as novel objective performance metrics, such as the delay of recognition. Statistical analyses suggested that the AP mode resulted in more seamless robotic movement than the state-of-the-art GT mode, and that participants generally preferred the AP mode.
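
The abstract does not detail the AP-mode network, but the core idea is a classifier over short windows of gaze-related features. The sketch below (PyTorch) is a minimal, hypothetical illustration of that idea: the feature set, window length, GRU architecture, and the names `GazeActionClassifier` and `ACTION_PRIMITIVES` are assumptions for illustration, not the authors' published design.

```python
# Hypothetical sketch of the "action prediction" (AP) idea: a small
# recurrent classifier over sliding windows of gaze-related features.
# The feature set, window length, primitive labels, and architecture
# are illustrative assumptions, not the published system.
import torch
import torch.nn as nn

ACTION_PRIMITIVES = ["reach", "grasp", "transport", "release"]  # assumed label set

class GazeActionClassifier(nn.Module):
    def __init__(self, n_features: int = 6, hidden: int = 32,
                 n_classes: int = len(ACTION_PRIMITIVES)):
        super().__init__()
        # GRU summarizes a short window of per-frame gaze features
        # (e.g., 3D gaze point, gaze velocity, fixation duration).
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, time, n_features) -> class logits per window
        _, h = self.gru(window)
        return self.head(h[-1])

# Example: classify one 30-frame window of 6 gaze features.
model = GazeActionClassifier()
logits = model(torch.randn(1, 30, 6))
print(ACTION_PRIMITIVES[logits.argmax(dim=-1).item()])
```

Because the classifier runs on every incoming window, it can commit to a primitive before the corresponding hand motion completes, which is what allows the AP mode to "often predict" the operator's intent rather than merely react to it.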

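The "delay of recognition" metric is only named in the abstract; one plausible reading, sketched below, measures the lag from the ground-truth onset of a primitive to the classifier's first matching prediction, with negative values when the AP mode anticipates the primitive. The function name `delay_of_recognition` and the 30 Hz frame rate are hypothetical, not taken from the paper.

```python
# Hypothetical computation of a "delay of recognition" metric: the lag
# between the ground-truth onset of an action primitive and the first
# frame at which the classifier outputs that primitive. The paper's
# exact definition may differ; this is an illustrative assumption.
from typing import Optional, Sequence

def delay_of_recognition(true_labels: Sequence[str],
                         predicted_labels: Sequence[str],
                         onset_index: int,
                         dt: float = 1 / 30) -> Optional[float]:
    """Return the delay in seconds; negative means the primitive was
    predicted before its ground-truth onset."""
    target = true_labels[onset_index]
    for i, pred in enumerate(predicted_labels):
        if pred == target:
            return (i - onset_index) * dt  # negative => anticipated
    return None  # primitive was never recognized

# Example: "grasp" begins at frame 2 but is recognized at frame 4 (30 Hz),
# giving a delay of about 0.067 s.
true = ["reach", "reach", "grasp", "grasp", "grasp", "grasp"]
pred = ["reach", "reach", "reach", "reach", "grasp", "grasp"]
print(delay_of_recognition(true, pred, onset_index=2))
```
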
Keywords