IEEE Access (Jan 2023)

Overcoming Obstacles With a Reconfigurable Robot Using Deep Reinforcement Learning Based on a Mechanical Work-Energy Reward Function

  • Or Simhon,
  • Zohar Karni,
  • Sigal Berman,
  • David Zarrouk

DOI
https://doi.org/10.1109/ACCESS.2023.3274675
Journal volume & issue
Vol. 11
pp. 47681–47689

Abstract

This paper presents a Deep Reinforcement Learning (DRL) method based on a mechanical work-energy reward function applied to a reconfigurable RSTAR robot to overcome obstacles. The RSTAR is a crawling robot that can reconfigure its shape and shift the location of its center of mass via a sprawl and a four-bar extension mechanism. The DRL was applied in a simulated environment with a physics engine (Unity™). The robot was trained on a step obstacle and a two-stage narrow passage obstacle composed of a horizontal and a vertical channel. To evaluate the benefits of the proposed energy-based reward function, it was compared to time-based and movement-based reward functions. The results showed that the energy-based reward produced superior results in terms of obstacle height, energy requirements, and time to overcome the obstacle. The energy-based reward method also converged to the solution faster than the other reward methods. The DRL's results for all the methods (energy-, time-, and movement-based rewards) were superior to the best results produced by human experts (see attached video).
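The core idea of the energy-based reward is to penalize the mechanical work expended by the robot's actuators at each simulation step while rewarding progress past the obstacle. The following Python snippet is a minimal sketch of such a per-step reward; the function name, weighting coefficients, and success bonus are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def energy_reward(joint_torques, joint_deltas, progress, obstacle_cleared,
                  w_energy=0.01, w_progress=1.0, success_bonus=100.0):
    """Sketch of a mechanical work-energy reward for one simulation step.

    Penalizes the mechanical work done by the actuators
    (sum of |torque_i * delta_angle_i|) and rewards forward progress
    toward the goal. Weights and the terminal bonus are illustrative
    assumptions, not values from the paper.
    """
    # Mechanical work expended by the motors during this step (Joules)
    work = float(np.sum(np.abs(np.asarray(joint_torques) *
                               np.asarray(joint_deltas))))
    # Trade off progress against energy expenditure
    reward = w_progress * progress - w_energy * work
    if obstacle_cleared:
        reward += success_bonus  # terminal bonus for overcoming the obstacle
    return reward
```

Under this kind of formulation, a policy that clears the obstacle with less actuator work accumulates a higher return than one that relies on brute-force motions, which is consistent with the reported gains in energy requirements and convergence speed.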

Keywords