IEEE Access (Jan 2021)

Object Permanence Through Audio-Visual Representations

  • Fanjun Bu,
  • Chien-Ming Huang

DOI
https://doi.org/10.1109/ACCESS.2021.3115082
Journal volume & issue
Vol. 9
pp. 131574–131582

Abstract


As robots perform manipulation tasks and interact with objects, they may accidentally drop objects (e.g., due to an inadequate grasp of an unfamiliar object) that subsequently bounce out of their visual fields. To enable robots to recover from such errors, we draw upon the concept of object permanence—objects remain in existence even when they are not being sensed (e.g., seen) directly. In particular, we developed a multimodal neural network model that takes a partial, observed bounce trajectory and the audio of the drop impact as inputs and predicts the full bounce trajectory and the end location of a dropped object. We empirically show that 1) our multimodal method predicted end locations close (i.e., within the visual field of the robot’s wrist camera) to the actual locations, and 2) the robot was able to retrieve dropped objects by applying minimal vision-based pick-up adjustments. Additionally, our method outperformed five comparison baselines in retrieving dropped objects. Our results contribute to enabling object permanence for robots and to error recovery from object drops.
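
The abstract does not specify the network architecture, so the following is only a minimal illustrative sketch of one plausible way to fuse the two modalities it names: an LSTM encoder over the partial bounce trajectory, a small CNN encoder over an impact-audio spectrogram, and an MLP head regressing the end location. All module names, dimensions, and design choices here are hypothetical assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BounceEndpointNet(nn.Module):
    """Hypothetical sketch: fuse partial trajectory + impact audio
    to regress a dropped object's 2-D end location."""

    def __init__(self, traj_dim=3, hidden=64, audio_feat=64):
        super().__init__()
        # Trajectory encoder: LSTM over observed (x, y, z) positions.
        self.traj_enc = nn.LSTM(traj_dim, hidden, batch_first=True)
        # Audio encoder: small CNN over a 1-channel impact spectrogram.
        self.audio_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, audio_feat), nn.ReLU(),
        )
        # Fusion head: concatenated features -> end location on the plane.
        self.head = nn.Sequential(
            nn.Linear(hidden + audio_feat, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, traj, spec):
        # traj: (B, T, 3) partial bounce trajectory
        # spec: (B, 1, F, T') log-mel spectrogram of the drop impact
        _, (h, _) = self.traj_enc(traj)          # h[-1]: (B, hidden)
        fused = torch.cat([h[-1], self.audio_enc(spec)], dim=1)
        return self.head(fused)                  # (B, 2) predicted end (x, y)

# Example with random tensors standing in for real sensor data:
model = BounceEndpointNet()
traj = torch.randn(8, 20, 3)       # 20 observed trajectory points
spec = torch.randn(8, 1, 128, 64)  # 128 mel bins x 64 time frames
end_xy = model(traj, spec)         # -> torch.Size([8, 2])
```

Late fusion by concatenation, as sketched here, is one common choice for combining asynchronous modalities; the actual model in the paper may fuse the signals differently or predict the full trajectory rather than only the endpoint.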

Keywords