Journal of Robotics (Jan 2020)

Visual Navigation with Asynchronous Proximal Policy Optimization in Artificial Agents

  • Fanyu Zeng,
  • Chen Wang

DOI
https://doi.org/10.1155/2020/8702962
Journal volume & issue
Vol. 2020

Abstract


Vanilla policy gradient methods suffer from high variance, leading to unstable policies during training whose performance fluctuates drastically between iterations. To address this issue, we analyze the policy optimization process of a navigation method based on deep reinforcement learning (DRL) that uses asynchronous gradient descent for optimization. We present a navigation variant, asynchronous proximal policy optimization navigation (appoNav), that guarantees monotonic policy improvement during policy optimization. Experiments conducted in DeepMind Lab show that artificial agents trained with appoNav perform better than those trained with the compared baseline algorithm.
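The abstract attributes appoNav's stability to the proximal policy optimization (PPO) objective, which constrains how far each update can move the policy. As a rough illustration only, the PyTorch sketch below shows the standard PPO clipped surrogate loss; the function name, tensor shapes, and choice of PyTorch are assumptions for exposition and are not taken from the paper's implementation.

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    Clipping the probability ratio keeps each update close to the previous
    policy, which discourages the drastic performance swings associated
    with vanilla policy gradient updates.
    """
    ratio = torch.exp(log_probs_new - log_probs_old)  # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the surrogate; return its negative as a loss to minimize.
    return -torch.min(unclipped, clipped).mean()

# Hypothetical usage with dummy rollout data.
log_probs_new = torch.randn(8, requires_grad=True)
log_probs_old = torch.randn(8)
advantages = torch.randn(8)
loss = ppo_clip_loss(log_probs_new, log_probs_old, advantages)
loss.backward()
```

In an asynchronous setup of the kind the abstract describes, each worker would presumably compute such a loss on its own rollouts and apply the resulting gradients to shared network parameters, rather than training a single synchronous learner.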