Mathematical Biosciences and Engineering (Feb 2024)

A two-stage grasp detection method for sequential robotic grasping in stacking scenarios

  • Jing Zhang,
  • Baoqun Yin,
  • Yu Zhong,
  • Qiang Wei,
  • Jia Zhao,
  • Hazrat Bilal

DOI
https://doi.org/10.3934/mbe.2024152
Journal volume & issue
Vol. 21, no. 2
pp. 3448–3472

Abstract


Dexterous grasping is essential for the fine manipulation tasks of intelligent robots; however, its application in stacking scenarios remains a challenge. In this study, we propose a two-phase grasp detection approach for sequential robotic grasping in stacking scenarios. In the first phase, a rotated-YOLOv3 (R-YOLOv3) model was designed to efficiently detect the category and position of the top-layer object, facilitating the detection of stacked objects. A stacked-scenario dataset with only the top-level objects annotated was then built for training and testing the R-YOLOv3 network. In the second phase, a G-ResNet50 model was developed to improve grasping accuracy by finding the most suitable pose for grasping the uppermost object in various stacking scenarios. Finally, a robot was directed to execute the task of sequentially grasping the stacked objects. The proposed method achieved an average grasp prediction success rate of 96.60% on the Cornell grasping dataset. In 280 real-world grasping experiments conducted in stacked scenarios, the robot achieved a maximum grasping success rate of 95.00% and an average grasping success rate of 83.93% across all handled objects. These experimental findings demonstrate the efficacy and competitiveness of the proposed approach for grasping tasks in complex multi-object stacked environments.
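
For readers who want a concrete picture of the pipeline, the sketch below illustrates how the two stages described in the abstract could be chained at inference time. It is a minimal Python sketch under stated assumptions, not the authors' released code: the names (RotatedBox, GraspPose, detect_top_object, predict_grasp_pose, sequential_grasp) are hypothetical placeholders, and it assumes the first-stage detector returns a rotated bounding box for the uppermost object while the second-stage network regresses a grasp rectangle (center, angle, gripper width) for that region.

    # Hypothetical sketch of the two-stage grasp pipeline described in the abstract.
    # Names, data shapes, and the robot/camera interfaces are illustrative assumptions.

    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class RotatedBox:        # assumed output of stage 1 (R-YOLOv3-style detector)
        cx: float            # box center, pixels
        cy: float
        w: float
        h: float
        angle: float         # box rotation, degrees
        label: str           # object category
        score: float         # detection confidence


    @dataclass
    class GraspPose:         # assumed output of stage 2 (G-ResNet50-style regressor)
        x: float             # grasp center, pixels
        y: float
        theta: float         # gripper rotation, degrees
        width: float         # gripper opening, pixels


    def detect_top_object(image) -> Optional[RotatedBox]:
        """Stage 1: return the highest-confidence top-layer detection, or None."""
        raise NotImplementedError("placeholder for the rotated detector")


    def predict_grasp_pose(image, box: RotatedBox) -> GraspPose:
        """Stage 2: regress a grasp pose for the detected top-layer object."""
        raise NotImplementedError("placeholder for the grasp network")


    def sequential_grasp(camera, robot, max_picks: int = 20) -> int:
        """Repeat detect -> grasp until no top-layer object remains; return pick count."""
        picks = 0
        while picks < max_picks:
            image = camera.capture()
            box = detect_top_object(image)
            if box is None:          # stack is empty: nothing left on top
                break
            pose = predict_grasp_pose(image, box)
            robot.pick(pose)         # execute the grasp, then place the object aside
            picks += 1
        return picks

Detecting only the uppermost object before each pick is what allows the same two models to be reapplied after every grasp, so the stack is cleared one layer at a time rather than requiring the detector to reason about the whole pile at once.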

Keywords