Complex & Intelligent Systems (Aug 2023)

Learning high-level robotic manipulation actions with visual predictive model

  • Anji Ma,
  • Guoyi Chi,
  • Serena Ivaldi,
  • Lipeng Chen

DOI
https://doi.org/10.1007/s40747-023-01174-5
Journal volume & issue
Vol. 10, no. 1
pp. 811 – 823

Abstract


Learning visual predictive models has great potential for real-world robot manipulation. Visual predictive models serve as models of real-world dynamics, capturing the interactions between the robot and objects. However, prior works in the literature have focused mainly on low-level elementary robot actions, which typically result in lengthy, inefficient, and highly complex robot manipulation. In contrast, humans usually employ top–down reasoning over high-level actions rather than bottom–up stacking of low-level ones. To address this limitation, we present a novel formulation of robot manipulation as sequences of pick-and-place, a commonly applied high-level robot action realized through grasping. We propose a novel visual predictive model that combines an action decomposer and a video prediction network to learn the intrinsic semantic information of high-level actions. Experiments show that our model can accurately predict the object dynamics (i.e., the object movements under robot manipulation) while trained directly on observations of high-level pick-and-place actions. We also demonstrate that, together with a sampling-based planner, our model achieves a higher success rate using high-level actions on a variety of real robot manipulation tasks.
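To make the planning loop described above concrete, the following is a minimal sketch of a sampling-based planner over high-level pick-and-place actions. The predictive model here is a toy stand-in (the object simply teleports to the place location), and all function names, the random-shooting strategy, and the 2D workspace are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def predict(state, action):
    # Toy stand-in for the learned visual predictive model: the object
    # is assumed to end up exactly at the commanded place location.
    pick_xy, place_xy = action
    return place_xy

def sample_actions(n, rng, workspace=1.0):
    # Each high-level action is a (pick, place) pair of 2D positions
    # sampled uniformly over a unit workspace (illustrative assumption).
    return [(rng.uniform(0.0, workspace, 2), rng.uniform(0.0, workspace, 2))
            for _ in range(n)]

def plan(state, goal, n_samples=256, seed=0):
    # Random-shooting planner: sample candidate pick-and-place actions,
    # predict each outcome with the model, and return the action whose
    # predicted object position is closest to the goal.
    rng = np.random.default_rng(seed)
    candidates = sample_actions(n_samples, rng)
    costs = [np.linalg.norm(predict(state, a) - goal) for a in candidates]
    return candidates[int(np.argmin(costs))]

best = plan(np.array([0.1, 0.1]), np.array([0.5, 0.5]))
```

Because each sampled action is a complete pick-and-place, a single planning step can move an object across the workspace, whereas a planner over low-level motor commands would need a long action sequence for the same effect.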

Keywords