IEEE Access (Jan 2020)

Batch Prioritization in Multigoal Reinforcement Learning

  • Luiz Felipe Vecchietti
  • Taeyoung Kim
  • Kyujin Choi
  • Junhee Hong
  • Dongsoo Har

DOI
https://doi.org/10.1109/ACCESS.2020.3012204
Journal volume & issue
Vol. 8
pp. 137449–137461

Abstract


In multigoal reinforcement learning, an agent interacts with an environment and learns to achieve multiple goals. The goal-conditioned policy is trained to generalize its behavior effectively across multiple goals. During training, the experiences collected by the agent are randomly sampled from a replay buffer. Because biased sampling of achieved goals affects the success rate of a given task, it should be avoided by considering the valid goal space, introduced here as the set of goals to achieve, and the current competence of the policy. To this end, a novel prioritization method for the creation of batches, i.e., collections of samples, is proposed. Candidate batches are sampled and associated with costs; in each iteration, the batch with the minimum cost is chosen to train the policy. The cost function is modeled using an intended goal, proposed as a hypothetical goal that the policy is trying to learn in each cycle, together with information about the valid goal space. The minimum cost of the batch selected in each iteration decreases throughout training as the policy learns to achieve goals near the center of the valid goal space. The proposed batch prioritization method is combined with hindsight experience replay (HER) and compared with other state-of-the-art prioritization methods in robotic control tasks from the OpenAI Gym suite. The proposed method achieves improved learning performance in 4 out of 5 tasks, particularly the harder ones. The experimental results suggest that the proposed method for creating training batches, which uses the valid goal space information and the current competence of the policy, can enhance learning performance in multigoal tasks with a high-dimensional goal space.
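As an illustration of the batch selection step described in the abstract, the Python sketch below samples several candidate batches from a replay buffer and keeps the one with the minimum cost. It is a minimal sketch under stated assumptions, not the paper's implementation: the cost here is simply the mean distance between a batch's achieved goals and a hypothetical intended goal, standing in for the paper's cost function built from the intended goal and the valid goal space, and the names select_min_cost_batch, num_candidates, and achieved_goal are illustrative.

    import numpy as np

    def select_min_cost_batch(replay_buffer, intended_goal, batch_size=256,
                              num_candidates=5, rng=None):
        """Sample candidate batches and return the one with the minimum cost.

        Assumed cost model (for illustration only): the mean Euclidean
        distance between the achieved goals in a batch and the intended
        goal, so batches whose goals lie near the intended goal are
        preferred for training.
        """
        rng = rng or np.random.default_rng()
        best_batch, best_cost = None, np.inf
        for _ in range(num_candidates):
            # Draw one candidate batch uniformly at random from the buffer.
            idx = rng.integers(0, len(replay_buffer), size=batch_size)
            batch = [replay_buffer[i] for i in idx]
            achieved = np.array([t["achieved_goal"] for t in batch])
            # Cost of the candidate batch under the assumed distance model.
            cost = np.linalg.norm(achieved - intended_goal, axis=1).mean()
            if cost < best_cost:
                best_batch, best_cost = batch, cost
        # The selected batch is then used for one policy update step.
        return best_batch, best_cost

In this sketch, num_candidates controls how many candidate batches compete in each iteration; the returned minimum cost would be expected to decrease over training as the policy achieves goals closer to the intended goal, mirroring the behavior described above.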

Keywords