Jisuanji Kexue yu Tansuo (Journal of Frontiers of Computer Science and Technology) (Jun 2020)

Review of Model-Based Reinforcement Learning

  • ZHAO Tingting, KONG Le, HAN Yajie, REN Dehua, CHEN Yarui

DOI
https://doi.org/10.3778/j.issn.1673-9418.1912040
Journal volume & issue
Vol. 14, no. 6
pp. 918–927

Abstract


Deep reinforcement learning (DRL), as an important learning paradigm in the field of machine learning, has received increasing attention since AlphaGo defeated human champions. DRL interacts with the environment by trial and error and obtains the optimal policy by maximizing the cumulative reward. Reinforcement learning can be divided into two categories: model-free reinforcement learning and model-based reinforcement learning. The training process of model-free reinforcement learning requires a large number of samples, so it is difficult for model-free methods to achieve good performance when the sampling budget is limited and many samples cannot be collected. In contrast, model-based reinforcement learning can reduce the demand for real samples and improve data efficiency by making full use of the environment model. This paper focuses on the field of model-based reinforcement learning, introduces its research status, surveys its classical algorithms, and discusses its future development trends and application prospects.
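The contrast the abstract draws between model-free updates and model-based planning can be made concrete with a small Dyna-Q-style sketch. This is an illustration only (a toy chain environment and hyperparameters assumed here, not code from the paper): every real transition both updates the value estimates directly and updates a learned model, and the model is then replayed for extra planning updates, which is how model-based methods reduce the number of real samples needed.

```python
# Illustrative Dyna-Q sketch: real transitions train both Q-values and a model;
# simulated transitions drawn from the model provide extra "planning" updates.
import random
from collections import defaultdict

# Toy deterministic chain environment (assumed for illustration):
# states 0..N-1, action 0 moves left, action 1 moves right, reward 1 at the end.
N_STATES, ACTIONS = 6, (0, 1)

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

q = defaultdict(float)          # Q(s, a) estimates
model = {}                      # learned model: (s, a) -> (s', r)
alpha, gamma, epsilon, planning_steps = 0.1, 0.95, 0.1, 20

def epsilon_greedy(state):
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

for episode in range(50):
    state, done = 0, False
    while not done:
        action = epsilon_greedy(state)
        next_state, reward, done = step(state, action)   # real interaction
        # (a) model-free update from the real transition
        target = reward + gamma * max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
        # (b) update the learned environment model with the observed transition
        model[(state, action)] = (next_state, reward)
        # (c) planning: replay simulated transitions sampled from the model
        for _ in range(planning_steps):
            (s, a), (s2, r) = random.choice(list(model.items()))
            target = r + gamma * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
        state = next_state

# Greedy action per state after training (should point toward the rewarding end).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```

With the planning loop enabled, far fewer real episodes are needed to recover the greedy policy than with the model-free update alone, which is the data-efficiency argument the abstract makes for model-based methods.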

Keywords