Applied Sciences (Oct 2022)
Sim–Real Mapping of an Image-Based Robot Arm Controller Using Deep Reinforcement Learning
Abstract
Models trained with deep reinforcement learning (DRL) have been deployed in various areas of robotics with varying degrees of success. To overcome the limitations of data gathering in the real world, DRL training utilizes simulated environments and transfers the learned policy to real-world scenarios, i.e., sim–real transfer. Simulators fail to accurately capture the entire dynamics of the real world, so simulation-trained policies often fail when applied to reality, a failure termed the reality gap (RG). In this paper, we propose a search (mapping) algorithm that takes real-world observations (images) and maps them to the policy-equivalent images in the simulated environment using a convolutional neural network (CNN) model. This two-step training process, a DRL policy plus a mapping model, overcomes the RG problem using simulated data only. We evaluated the proposed system on a gripping task with a custom-made robot arm in the real world and compared its performance against a conventional DRL sim–real transfer system. The conventional system achieved a 15–57% success rate in the gripping operation, depending on the position of the target object, while the mapping-based sim–real system achieved 100%. The experimental results demonstrated that the proposed DRL-with-mapping method appropriately corresponded the real world to the simulated environment, confirming that the scheme can achieve high sim–real generalization at significantly lower training cost.
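The inference pipeline described above (real image → mapping model → policy-equivalent simulated image → frozen simulation-trained policy → action) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the flattened-pixel embedding stands in for the trained CNN, the linear map stands in for the DRL policy, and all names (`embed`, `map_real_to_sim`, `sim_bank`) are hypothetical.

```python
import numpy as np

def embed(image):
    """Stand-in for the CNN feature extractor (the paper trains a CNN)."""
    return image.astype(np.float64).ravel()

def map_real_to_sim(real_image, sim_bank):
    """Search the bank of simulated observations for the policy-equivalent
    image closest to the real-world image in embedding space."""
    query = embed(real_image)
    dists = [np.linalg.norm(embed(s) - query) for s in sim_bank]
    return sim_bank[int(np.argmin(dists))]

def policy(sim_image, weights):
    """Stand-in for the frozen policy trained purely in simulation
    (a linear map here, for illustration only)."""
    return weights @ embed(sim_image)

rng = np.random.default_rng(0)
sim_bank = [rng.random((8, 8)) for _ in range(16)]  # simulated observations
weights = rng.random((2, 64))                       # "trained" policy parameters

# A real-world observation: a simulated view perturbed by small sensor noise.
real_obs = sim_bank[5] + 0.01 * rng.random((8, 8))
sim_equiv = map_real_to_sim(real_obs, sim_bank)     # bridge the reality gap
action = policy(sim_equiv, weights)                 # act on the mapped image
```

Because the policy only ever sees simulated-domain images, it never has to generalize across the reality gap itself; the mapping model absorbs that burden.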
Keywords