IEEE Access (Jan 2022)

Online Model-Free Reinforcement Learning for Output Feedback Tracking Control of a Class of Discrete-Time Systems With Input Saturation

  • Ahmad Jobran Al-Mahasneh
  • Sreenatha G. Anavatti
  • Matthew A. Garratt

DOI
https://doi.org/10.1109/ACCESS.2022.3210136
Journal volume & issue
Vol. 10
pp. 104966–104979

Abstract

In this paper, a new model-free Model-Actor (MA) reinforcement learning controller is developed for output feedback control of a class of discrete-time systems with input saturation constraints. The proposed controller is composed of two neural networks: a model network and an actor network. The model network predicts the output of the plant when a given control action is applied to it, while the actor network estimates the optimal control action required to drive the output to the desired trajectory. The main advantages of the proposed controller over previously proposed controllers are that it can control systems without explicit knowledge of their dynamics, it can start learning from scratch without any offline training, and it can explicitly handle the control constraints in the controller design. Comparison results with a previously published reinforcement learning output feedback controller and other controllers confirm the superiority of the proposed controller.
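
To make the abstract's structure concrete, below is a minimal NumPy sketch of an MA-style controller. The network sizes, learning rates, plant equation, and gradient update laws are all illustrative assumptions, not the paper's actual design; only the overall structure follows the abstract: a model network learned online to predict the plant output, an actor network trained through that model, and input saturation handled in the controller output.

```python
# Minimal Model-Actor (MA) style sketch; all numeric choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def net_init(n_in, n_hidden, n_out):
    # One-hidden-layer network with tanh activation (assumed architecture).
    return {"W1": 0.1 * rng.standard_normal((n_hidden, n_in)),
            "W2": 0.1 * rng.standard_normal((n_out, n_hidden))}

def net_forward(p, x):
    h = np.tanh(p["W1"] @ x)
    return p["W2"] @ h, h

U_MAX = 1.0                  # input saturation bound (illustrative)
ETA_M, ETA_A = 0.05, 0.01    # learning rates (assumed)
model = net_init(2, 8, 1)    # model network: (y_k, u_k) -> predicted y_{k+1}
actor = net_init(2, 8, 1)    # actor network: (y_k, r_{k+1}) -> control action

def plant(y, u):
    # Unknown discrete-time plant; the controller never uses this equation.
    return 0.8 * y + 0.5 * np.tanh(u)

y = 0.0
for k in range(500):
    r_next = np.sin(0.05 * k)                 # desired trajectory

    # Actor proposes an action; tanh scaling keeps |u| <= U_MAX, so the
    # saturation constraint is embedded in the controller output itself.
    x_a = np.array([y, r_next])
    a_raw, h_a = net_forward(actor, x_a)
    u = U_MAX * np.tanh(a_raw[0])

    y_next = plant(y, u)                      # apply action, observe output

    # Model network: one gradient step on the squared prediction error,
    # so the plant model is learned online, from scratch.
    x_m = np.array([y, u])
    y_hat, h_m = net_forward(model, x_m)
    e_m = y_hat[0] - y_next
    g_m2 = e_m * h_m[None, :]
    g_m1 = np.outer(e_m * model["W2"][0] * (1 - h_m**2), x_m)
    model["W2"] -= ETA_M * g_m2
    model["W1"] -= ETA_M * g_m1

    # Actor network: back-propagate the tracking error through the learned
    # model (d y_hat / d u) and the saturation (d u / d a) to update the actor.
    e_t = y_hat[0] - r_next
    dy_du = (model["W2"] @ ((1 - h_m**2) * model["W1"][:, 1])).item()
    du_da = U_MAX * (1 - np.tanh(a_raw[0])**2)
    g = e_t * dy_du * du_da
    g_a2 = g * h_a[None, :]
    g_a1 = np.outer(g * actor["W2"][0] * (1 - h_a**2), x_a)
    actor["W2"] -= ETA_A * g_a2
    actor["W1"] -= ETA_A * g_a1

    y = y_next

print(f"tracking error at final step: {abs(y - np.sin(0.05 * 499)):.4f}")
```

In this sketch the saturation is satisfied by construction (the tanh scaling of the actor output), which is one plausible reading of "explicitly handle the control constraints in the controller design"; the paper's actual constraint-handling mechanism and update laws may differ.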

Keywords