IEEE Access (Jan 2023)

Vision-Based In-Hand Manipulation for Variously Shaped and Sized Objects by a Robotic Gripper With Active Surfaces

  • Yuzuka Isobe,
  • Sunhwi Kang,
  • Takeshi Shimamoto,
  • Yoshinari Matsuyama,
  • Sarthak Pathak,
  • Kazunori Umeda

DOI
https://doi.org/10.1109/ACCESS.2023.3331012
Journal volume & issue
Vol. 11
pp. 127317 – 127333

Abstract


In-hand manipulation to translate and rotate a grasped object is a challenging problem for robotic hands. As one solution, robotic hands with belts around the fingers ("active surfaces") have been developed for continuous and seamless manipulation. In practice, however, the grasped object can only be rotated through a range of less than 90°, except for objects with simple shapes such as cubes and cylinders. This is because, depending on the object's shape, the fingers cannot follow the grasp width required to keep hold of it, or the desired rotation cannot be produced, causing the object to be dropped or the rotation to stall. This paper presents a method to address these problems and rotate objects of various shapes and sizes through a large range of motion. A stereo camera is attached to a two-fingered robotic hand with belts. The changes in the contact points between the belt surfaces and the object are predicted. Based on this prediction, the belts are controlled to adjust the angular velocity of the object so that the fingers can follow the grasp width required to hold it and the appropriate rotation is produced. The fingers are controlled to follow the predicted contact points and to deflect the belts, both canceling unwanted rotation and generating the desired rotation. In experiments, 32 objects of 16 shapes and 2 sizes, as well as other real-world objects, were rotated through one revolution; the achieved rotational ranges were larger than those reported in previous studies, confirming the validity of the proposed method.
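To make the active-surface idea concrete, the following is a minimal kinematic sketch of how two opposing belts can impose both a translation and a rotation on a grasped object. It assumes a simplified no-slip rolling model between two parallel belts, with the contact points at distances r1 and r2 from the object's rotation center; all names and the model itself are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative kinematics for a two-fingered gripper with belts
# ("active surfaces"). Assumed simplified model (not from the paper):
# under no-slip rolling between two parallel belts,
#   v     = (v1 + v2) / 2            (object translation along the belts)
#   omega = (v1 - v2) / (r1 + r2)    (object angular velocity)
# where v1, v2 are the belt surface speeds and r1, r2 the distances
# from the object's rotation center to the two contact points.

def belt_speeds_for_motion(omega_des, v_des, r1, r2):
    """Belt surface speeds (v1, v2) that produce the desired object
    angular velocity omega_des and translation speed v_des."""
    diff = omega_des * (r1 + r2)   # required speed difference v1 - v2
    v1 = v_des + diff / 2.0
    v2 = v_des - diff / 2.0
    return v1, v2

def object_motion(v1, v2, r1, r2):
    """Inverse map: object twist (v, omega) from commanded belt speeds."""
    v = (v1 + v2) / 2.0
    omega = (v1 - v2) / (r1 + r2)
    return v, omega
```

In a vision-based loop such as the one the abstract describes, r1 and r2 would be re-estimated each frame from the predicted contact points, and the belt speeds recomputed so the object keeps rotating without being dropped; for example, pure rotation in place corresponds to v_des = 0, giving equal and opposite belt speeds.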

Keywords