Journal of King Saud University: Computer and Information Sciences (Jul 2022)

HLifeRL: A hierarchical lifelong reinforcement learning framework

  • Fan Ding,
  • Fei Zhu

Journal volume & issue
Vol. 34, no. 7
pp. 4312–4321

Abstract


Deep reinforcement learning has made remarkable achievements in single-task environments. However, it is often plagued by catastrophic forgetting, prohibitively low sample efficiency, and a lack of scalability when facing multi-task environments. To address these issues, a Hierarchical Lifelong Reinforcement Learning framework (HLifeRL) is proposed to enhance the ability of agents to handle a sequence of tasks through skill discovery (in this paper, an option is treated as a low-level skill) and a hierarchical policy. HLifeRL automatically extracts task-related knowledge without any human intervention or prior knowledge. Moreover, with the help of a scalable skill library and a master policy, various skills can be flexibly combined to complete multiple tasks in call-and-return fashion. The experimental results show that HLifeRL accelerates single-task training and delivers remarkable stability and scalability in a lifelong learning setting.
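The call-and-return control flow the abstract describes can be sketched minimally: a master policy picks a skill (option) from a growing library, the skill runs until its termination condition fires, then control returns to the master. This is a hedged illustration only; the class names (`Skill`, `MasterPolicy`), the fixed-action skills, and the random selection rule are all placeholder assumptions, not the paper's actual method.

```python
import random

class Skill:
    """A low-level option (toy stand-in): repeats one primitive action
    and terminates after a fixed horizon."""
    def __init__(self, name, action, horizon):
        self.name = name
        self.action = action      # fixed primitive action (placeholder)
        self.horizon = horizon    # simple termination condition

    def run(self, env_step):
        """Call-and-return execution: act until termination, then
        hand control back to the master policy with the return."""
        total = 0.0
        for _ in range(self.horizon):
            total += env_step(self.action)
        return total

class MasterPolicy:
    """High-level policy choosing among skills in a scalable library."""
    def __init__(self):
        self.library = []         # grows as new skills are discovered

    def add_skill(self, skill):
        # Scalability: new skills are appended without modifying old ones.
        self.library.append(skill)

    def select(self):
        # Placeholder for a learned high-level selection rule.
        return random.choice(self.library)

# Toy environment: reward 1.0 for action "a", else 0.0.
def env_step(action):
    return 1.0 if action == "a" else 0.0

master = MasterPolicy()
master.add_skill(Skill("go", "a", horizon=3))
master.add_skill(Skill("stop", "b", horizon=2))

chosen = master.select()
print(chosen.name, chosen.run(env_step))
```

In an actual lifelong setting, both the skills and the master's selection rule would be learned, and the library would be extended as new tasks arrive; the sketch only shows the control structure.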

Keywords