Robotics (Mar 2023)

Grasping Complex-Shaped and Thin Objects Using a Generative Grasping Convolutional Neural Network

  • Jaeseok Kim,
  • Olivia Nocentini,
  • Muhammad Zain Bashir,
  • Filippo Cavallo

DOI
https://doi.org/10.3390/robotics12020041
Journal volume & issue
Vol. 12, no. 2
p. 41

Abstract


Vision-based pose detection and grasping of complex-shaped and thin objects are challenging tasks. We propose an architecture that integrates the Generative Grasping Convolutional Neural Network (GG-CNN) with depth recognition to identify a suitable grasp pose. First, we construct a training dataset with data augmentation and train a GG-CNN on RGB images only. Then, we extract a segment of the tool using a color segmentation method and use it to compute an average depth. Additionally, we apply and evaluate different encoder–decoder models within the GG-CNN structure using the Intersection over Union (IoU) metric. Finally, we validate the proposed architecture through real-world grasping and pick-and-place experiments. Our framework achieves a success rate of over 85.6% for picking and placing seen surgical tools and 90% for unseen surgical tools. We collected a dataset of surgical tools and validated their pick and place with different GG-CNN architectures. In future work, we aim to expand the surgical tool dataset and improve the accuracy of the GG-CNN.
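
The abstract summarizes the method without implementation detail; a minimal sketch of the color-segmentation and average-depth step, assuming OpenCV HSV thresholding, hypothetical color thresholds, and a depth image registered to the RGB frame (none of which are specified in the abstract), might look like the following Python:

    import cv2
    import numpy as np

    def average_tool_depth(rgb, depth, hsv_lo=(35, 60, 60), hsv_hi=(85, 255, 255)):
        # Segment the tool by color and return the mean depth over the segment.
        # rgb    : HxWx3 uint8 image in BGR order (as loaded by OpenCV)
        # depth  : HxW float32 depth image registered to the color frame, in meters
        # hsv_lo, hsv_hi : hypothetical HSV thresholds for the tool color
        hsv = cv2.cvtColor(rgb, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))

        # Keep only pixels inside the color segment that have a valid depth reading.
        valid = (mask > 0) & (depth > 0)
        if not valid.any():
            return None  # no tool pixels found
        return float(depth[valid].mean())

The resulting average depth could then be combined with the grasp pose predicted by the GG-CNN to set the gripper approach height; the exact integration used by the authors is not described in the abstract.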

Keywords