Applied Sciences (Feb 2020)

Real–Sim–Real Transfer for Real-World Robot Control Policy Learning with Deep Reinforcement Learning

  • Naijun Liu,
  • Yinghao Cai,
  • Tao Lu,
  • Rui Wang,
  • Shuo Wang

DOI
https://doi.org/10.3390/app10051555
Journal volume & issue
Vol. 10, no. 5
p. 1555

Abstract


Compared to traditional data-driven learning methods, recently developed deep reinforcement learning (DRL) approaches can train robot agents to obtain control policies with appealing performance. However, learning control policies for real-world robots through DRL is costly and cumbersome. A promising alternative is to train policies in simulated environments and transfer the learned policies to real-world scenarios. Unfortunately, due to the reality gap between simulated and real-world environments, policies learned in simulation often generalize poorly to the real world. Bridging the reality gap remains a challenging problem. In this paper, we propose a novel real-sim-real (RSR) transfer method that consists of a real-to-sim training phase and a sim-to-real inference phase. In the real-to-sim training phase, a task-relevant simulated environment is constructed from semantic information of the real-world scenario and a coordinate transformation, and a policy is then trained with a DRL method in this simulated environment. In the sim-to-real inference phase, the learned policy is applied directly to control the robot in real-world scenarios without any real-world training data. Experimental results on two different robot control tasks show that the proposed RSR method trains skill policies with high generalization performance at significantly lower training cost.
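The two-phase pipeline described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's implementation: the toy 1-D environment, the use of tabular Q-learning in place of the paper's DRL method, and all class and function names (`SimReachEnv`, `train_in_sim`, `deploy`) are hypothetical. The sketch only shows the structure: train a policy entirely in a constructed simulation, then apply it to the target environment with no further training data.

```python
import random

class SimReachEnv:
    """Toy 1-D 'task-relevant' simulated environment (illustrative only):
    the agent must move from cell 0 to a goal cell that would, in the
    paper's setting, be reconstructed from real-world semantic information
    and a coordinate transformation."""
    def __init__(self, size=5, goal=4):
        self.size, self.goal = size, goal
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action: 0 = move left, 1 = move right; position is clamped to the grid
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.goal
        return self.pos, (1.0 if done else -0.1), done

def train_in_sim(env, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Real-to-sim phase: learn a policy inside the built simulation.
    Tabular Q-learning stands in for the DRL method used in the paper."""
    q = [[0.0, 0.0] for _ in range(env.size)]
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy exploration
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            s2, r, done = env.step(a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

def deploy(q, env, max_steps=20):
    """Sim-to-real phase: apply the learned policy greedily, with no
    training data collected from the target environment."""
    s, done, steps = env.reset(), False, 0
    while not done and steps < max_steps:
        a = 0 if q[s][0] >= q[s][1] else 1
        s, _, done = env.step(a)
        steps += 1
    return done, steps

random.seed(0)
policy = train_in_sim(SimReachEnv())
# Zero-shot transfer: the target env stands in for the real-world scenario.
success, steps = deploy(policy, SimReachEnv())
```

In this sketch the "real" environment is modeled by the same dynamics as the simulation, so transfer is trivially successful; the paper's contribution is precisely in constructing the simulation so that this correspondence holds for real robots.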

Keywords