IEEE Access (Jan 2020)
Recursive State-Value Function: A Method to Reduce the Complexity of Online Computation of Dynamic Programming
Abstract
This paper proposes a method to reduce the computational burden of dynamic programming so that its time consumption becomes acceptable for online control. The proposed method combines model predictive control (MPC) with a state-value function. It consists of two parts, an offline part and an online part: the offline part generates an approximation of the $k$-step recursive state-value function, which represents the cumulative reward obtainable from a state in $k$ steps under the optimal control policy, and the online part computes the best action in real time using this function, either on its own or in combination with MPC. Numerical examples at the end of the paper illustrate the effectiveness of the method, and the results show that it offers advantages over model predictive control and deep Q-learning.
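The offline/online split described above can be sketched with a standard Bellman recursion on a toy problem. Everything below is an illustrative assumption, not the paper's actual model: a hypothetical deterministic MDP with five states on a line, a made-up reward that grows with the state index, and a one-step-lookahead online rule. The recursion $V_k(s) = \max_a [\, r(s,a) + V_{k-1}(f(s,a)) \,]$ is the generic $k$-step value computation the abstract refers to.

```python
# Hypothetical toy MDP (assumed for illustration): states 0..4 on a line,
# actions step left or right, reward peaks at the rightmost state.
N_STATES = 5
ACTIONS = (-1, +1)  # step left or step right

def step(s, a):
    """Deterministic transition: move along the line, clipped to bounds."""
    return min(max(s + a, 0), N_STATES - 1)

def reward(s, a):
    """Reward for taking an action in state s (grows with s)."""
    return float(s)

def k_step_value(k):
    """Offline part (sketch): tabulate V_k(s), the best cumulative reward
    obtainable from s in k steps, via the Bellman recursion
    V_k(s) = max_a [ r(s,a) + V_{k-1}(step(s,a)) ], with V_0 = 0."""
    V = [0.0] * N_STATES
    for _ in range(k):
        V = [max(reward(s, a) + V[step(s, a)] for a in ACTIONS)
             for s in range(N_STATES)]
    return V

def best_action(s, V):
    """Online part (sketch): one-step lookahead against the stored table."""
    return max(ACTIONS, key=lambda a: reward(s, a) + V[step(s, a)])
```

Under these assumptions, `k_step_value(3)` plays the role of the offline-computed $k$-step value function, and `best_action` is the cheap online step: a single maximization over actions instead of a full dynamic-programming sweep, e.g. `best_action(0, k_step_value(3))` returns `+1` (move toward the high-reward end).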
Keywords