IEEE Access (Jan 2019)

Optimistic Sampling Strategy for Data-Efficient Reinforcement Learning

  • Dongfang Zhao,
  • Jiafeng Liu,
  • Rui Wu,
  • Dansong Cheng,
  • Xianglong Tang

DOI
https://doi.org/10.1109/ACCESS.2019.2913001
Journal volume & issue
Vol. 7
pp. 55763 – 55769

Abstract

The large number of interactions with the environment that is required is one of the most important problems in reinforcement learning (RL). To address this problem, several data-efficient RL algorithms have been proposed and successfully applied in practice. Unlike previous research, which focuses on the optimal policy evaluation and policy improvement stages, we actively select informative samples by leveraging an entropy-based optimal sampling strategy that takes the initial sample set into consideration. During the initial sampling process, information entropy is used to describe the potential samples, and the agent selects the most informative ones using an optimization method. In this way, the initial sample set is more informative than under random or fixed strategies, so a more accurate initial dynamics model and policy can be learned. The proposed optimal sampling method thus guides the agent to search in a more informative region. Experimental results on standard benchmark problems involving a pendulum, a cart pole, and a cart double pendulum show that our optimal sampling strategy achieves better data efficiency.
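The abstract does not specify the exact optimization procedure, but the core idea of entropy-guided initial sampling can be illustrated with a minimal sketch. The snippet below greedily picks the candidate states whose predictive variance (and hence, for a Gaussian, differential entropy) is highest under a simple RBF-kernel Gaussian-process prior; the kernel choice, hyperparameters, and greedy selection are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def greedy_entropy_sampling(candidates, k, length_scale=1.0, noise=1e-3):
    """Greedily select k candidates with the highest predictive entropy.

    Uses a GP prior with an RBF kernel as a stand-in for the paper's
    entropy-based sample scoring (an illustrative assumption). For a
    Gaussian, differential entropy is monotone in the variance, so
    maximizing predictive variance maximizes entropy.
    """
    selected = []
    remaining = list(range(len(candidates)))

    def rbf(a, b):
        d2 = np.sum((a - b) ** 2)
        return np.exp(-d2 / (2.0 * length_scale ** 2))

    for _ in range(k):
        best_i, best_var = None, -np.inf
        for i in remaining:
            x = candidates[i]
            if not selected:
                # No observations yet: prior variance for every candidate.
                var = 1.0 + noise
            else:
                # GP posterior variance given the already-selected points.
                K = np.array([[rbf(candidates[a], candidates[b])
                               for b in selected] for a in selected])
                K += noise * np.eye(len(selected))
                kx = np.array([rbf(candidates[a], x) for a in selected])
                var = 1.0 + noise - kx @ np.linalg.solve(K, kx)
            if var > best_var:
                best_i, best_var = i, var
        selected.append(best_i)
        remaining.remove(best_i)
    return selected

# Candidates near an already-chosen point carry little new information,
# so the greedy rule spreads samples over the state space.
picks = greedy_entropy_sampling(np.array([[0.0], [0.01], [5.0]]), 2)
```

In this toy example the second pick skips the near-duplicate candidate at 0.01 and jumps to the distant one at 5.0, which is the qualitative behavior the abstract attributes to the optimal sampling strategy: initial samples are pushed toward informative, unexplored regions rather than drawn at random.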

Keywords