Complex & Intelligent Systems (Oct 2023)

Data-efficient model-based reinforcement learning with trajectory discrimination

  • Tuo Qu,
  • Fuqing Duan,
  • Junge Zhang,
  • Bo Zhao,
  • Wenzhen Huang

DOI
https://doi.org/10.1007/s40747-023-01247-5
Journal volume & issue
Vol. 10, no. 2
pp. 1927 – 1936

Abstract


Deep reinforcement learning has long been used to solve high-dimensional, complex sequential decision problems. However, one of its biggest challenges is sample efficiency, especially on high-dimensional problems. Model-based reinforcement learning addresses this by planning with a learned world model, but its performance is limited by the imperfections of that model, so it typically reaches worse asymptotic performance than model-free reinforcement learning. In this paper, we propose a novel model-based reinforcement learning algorithm called World Model with Trajectory Discrimination (WMTD). We learn a representation of temporal dynamics by adding a trajectory discriminator to the world model, and then weight state-value estimates by the discriminator's output when optimizing the policy. Specifically, we augment trajectories to generate negative samples and train a trajectory discriminator that shares its feature extractor with the world model. Experimental results demonstrate that our method improves sample efficiency and achieves state-of-the-art performance on DeepMind Control tasks.
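The abstract's pipeline can be illustrated with a toy sketch: a shared feature extractor encodes trajectories, a discriminator head scores how realistic a trajectory looks (trained against negatives produced by augmentation, here a temporal shuffle), and that score weights value estimates. All dimensions, the tanh encoder, the mean-pooling, and the shuffle augmentation are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; WMTD uses a learned world model, not this toy encoder.
STATE_DIM, FEAT_DIM, HORIZON = 4, 8, 5

# Feature extractor shared between the "world model" and the discriminator.
W_feat = rng.normal(0.0, 0.1, (STATE_DIM, FEAT_DIM))

def features(traj):
    """Encode each state of a trajectory with the shared extractor."""
    return np.tanh(traj @ W_feat)

# Trajectory discriminator head: probability that a trajectory is real.
w_disc = rng.normal(0.0, 0.1, FEAT_DIM)

def disc_score(traj):
    # Mean-pool features over time, linear score, sigmoid -> (0, 1).
    z = features(traj).mean(axis=0)
    return 1.0 / (1.0 + np.exp(-z @ w_disc))

def augment_negative(traj):
    # Negative sample: shuffle the temporal order, breaking the dynamics
    # while keeping the marginal state distribution intact.
    return traj[rng.permutation(len(traj))]

def weighted_value_estimate(traj, values):
    # Down-weight value estimates from trajectories the discriminator
    # judges unrealistic (low probability of being real).
    return disc_score(traj) * np.mean(values)

real_traj = rng.normal(size=(HORIZON, STATE_DIM))
fake_traj = augment_negative(real_traj)
values = rng.normal(size=HORIZON)
print(disc_score(real_traj), weighted_value_estimate(real_traj, values))
```

In training, the discriminator would be fit to separate real from augmented trajectories with a binary cross-entropy loss, and the gradients would also shape the shared feature extractor; the sketch above only shows the forward pass.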
