IEEE Access (Jan 2020)

A Multi-Critic Reinforcement Learning Method: An Application to Multi-Tank Water Systems

  • Juan Martinez-Piazuelo,
  • Daniel E. Ochoa,
  • Nicanor Quijano,
  • Luis Felipe Giraldo

DOI
https://doi.org/10.1109/ACCESS.2020.3025194
Journal volume & issue
Vol. 8
pp. 173227–173238

Abstract


This paper investigates the combination of reinforcement learning and neural networks for the data-driven control of dynamical systems. In particular, we propose a multi-critic actor-critic architecture that eases the value-function learning task by distributing it across multiple neural networks. We also propose a filtered multi-critic approach that offers further performance improvements by easing the training of the control policy. All of the studied methods are evaluated through several numerical experiments on multi-tank water systems with nonlinear coupled dynamics, where control is known to be a challenging task. The simulation results show that the proposed multi-critic scheme outperforms the standard actor-critic approach in terms of the speed and sensitivity of the learning process, and that the filtered multi-critic strategy in turn outperforms the unfiltered one by the same measures. This work highlights the benefits of the multi-critic methodology on a state-of-the-art reinforcement learning algorithm, the deep deterministic policy gradient (DDPG), and demonstrates its application to multi-tank water systems relevant to industrial process control.
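The abstract describes aggregating several critic networks into one value estimate and then filtering that estimate before it drives policy training. The paper's exact architecture and filter are not given here, so the sketch below is an illustration only: it assumes a simple mean over the critics' estimates and an exponential moving average as the filter, with tiny fixed random networks standing in for trained critics. All function names (`make_critic`, `multi_critic_value`, `FilteredValue`) are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_critic(state_dim, action_dim, hidden=16):
    """A tiny stand-in critic: a fixed random two-layer net Q(s, a) -> scalar.
    In the actual method each critic would be a trained neural network."""
    w1 = rng.normal(size=(state_dim + action_dim, hidden))
    w2 = rng.normal(size=(hidden, 1))
    def q(state, action):
        x = np.concatenate([state, action])
        return float(np.tanh(x @ w1) @ w2)
    return q

def multi_critic_value(critics, state, action):
    """Distribute value estimation across several critics and aggregate.
    Mean aggregation is an assumption; the paper may combine critics differently."""
    return float(np.mean([q(state, action) for q in critics]))

class FilteredValue:
    """One plausible 'filtered' variant: an exponential moving average that
    smooths the aggregated estimate over successive training steps."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # weight on the newest estimate
        self.value = None
    def update(self, v):
        self.value = v if self.value is None else (
            self.alpha * v + (1.0 - self.alpha) * self.value)
        return self.value

# Usage: three critics scoring the same state-action pair.
critics = [make_critic(state_dim=2, action_dim=1) for _ in range(3)]
s, a = np.ones(2), np.ones(1)
raw = multi_critic_value(critics, s, a)
filt = FilteredValue(alpha=0.5)
smoothed = filt.update(raw)
```

The intended intuition matches the abstract: each critic carries part of the value-learning burden, and filtering the combined signal gives the policy a steadier training target.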
