Actuators (Aug 2024)

Synergistic Pushing and Grasping for Enhanced Robotic Manipulation Using Deep Reinforcement Learning

  • Birhanemeskel Alamir Shiferaw,
  • Tayachew F. Agidew,
  • Ali Saeed Alzahrani,
  • Ramasamy Srinivasagan

DOI
https://doi.org/10.3390/act13080316
Journal volume & issue
Vol. 13, no. 8
p. 316

Abstract
In robotic manipulation, achieving efficient and reliable grasping in cluttered environments remains a significant challenge. This study presents a novel approach that integrates pushing and grasping actions using deep reinforcement learning. The proposed model employs two fully convolutional neural networks—Push-Net and Grasp-Net—that predict pixel-wise Q-values for potential pushing and grasping actions from heightmap images of the scene. The training process utilizes deep Q-learning with a reward structure that incentivizes both successful pushes and grasps, encouraging the robot to create favorable conditions for grasping through strategic pushing actions. Simulation results demonstrate that the proposed model significantly outperforms traditional grasp-only policies, achieving an 87% grasp success rate in cluttered environments, compared to 60% for grasp-only approaches. The model shows robust performance in various challenging scenarios, including well-ordered configurations and novel objects, with completion rates of up to 100% and grasp success rates as high as 95.8%. These findings highlight the model’s ability to generalize to unseen objects and configurations, making it a practical solution for real-world robotic manipulation tasks.
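The pixel-wise formulation in the abstract can be sketched as follows. This is an illustrative NumPy example, not the authors' implementation: it assumes each network outputs one Q-value per heightmap pixel, and the agent executes whichever primitive (push or grasp) attains the highest Q-value anywhere in either map. Function names, array shapes, and values are placeholders for demonstration.

```python
import numpy as np

def select_action(push_q, grasp_q):
    """Pick the primitive (push or grasp) and pixel with the highest Q-value.

    push_q, grasp_q: (H, W) arrays of per-pixel Q-values, as would be
    predicted by Push-Net and Grasp-Net from the scene heightmap.
    Returns (primitive_name, (row, col), q_value).
    """
    candidates = {"push": push_q, "grasp": grasp_q}
    # Argmax jointly over both primitives and all pixels.
    return max(
        (
            (name, np.unravel_index(np.argmax(q), q.shape), float(q.max()))
            for name, q in candidates.items()
        ),
        key=lambda t: t[2],
    )

# Toy 4x4 workspace: grasping at pixel (3, 0) has the highest predicted value,
# so the agent would grasp there rather than push.
push_q = np.zeros((4, 4))
push_q[1, 2] = 0.5
grasp_q = np.zeros((4, 4))
grasp_q[3, 0] = 0.9
primitive, pixel, q = select_action(push_q, grasp_q)
```

In the paper's setup, when clutter makes all grasp Q-values low, a high push Q-value wins this comparison, which is how strategic pushes that rearrange objects get selected before a grasp is attempted.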

Keywords