Journal of Applied Science and Engineering (Nov 2022)

Cooperative Output Regulation By Q-learning For Discrete Multi-agent Systems In Finite-time

  • Wenjun Wei,
  • Jingyuan Tang

DOI
https://doi.org/10.6180/jase.202306_26(6).0011
Journal volume & issue
Vol. 26, no. 6
pp. 853 – 864

Abstract

This article studies the output regulation of discrete-time multi-agent systems with unknown models using a finite-time optimal control algorithm based on Q-learning and the linear quadratic regulator (LQR). The algorithm applies the Bellman optimality principle to derive the globally optimal Q-function and obtains the distributed optimal control law that minimizes the Q-function by policy iteration. Through local communication among agents, globally optimal control of each agent's output is achieved without relying on a dynamic model of the system. Furthermore, by designing a novel finite-time local error formula, the output regulation synchronization time is reduced by 50%. Finally, a MATLAB simulation example demonstrates the effectiveness of the proposed algorithm.
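
The Q-learning scheme summarized above alternates policy evaluation of a quadratic Q-function with policy improvement. As a rough illustration only, the sketch below shows model-free Q-learning policy iteration for a single-agent discrete-time LQR problem in Python; the system matrices, dimensions, and exploration settings are hypothetical, and the paper's distributed, finite-time cooperative formulation is not reproduced here.

```python
import numpy as np

# Hypothetical single-agent sketch of Q-learning policy iteration for
# discrete-time LQR. The dynamics below are used only to generate data;
# the learner never reads A or B directly.

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])        # assumed open-loop-stable example system
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)                    # state weight
Rc = np.eye(1)                    # input weight
n, m = 2, 1

def simulate(K, steps=200, noise=0.3):
    """Roll out u = -K x plus exploration noise and record (x, u, x_next)."""
    x = rng.standard_normal(n)
    data = []
    for _ in range(steps):
        u = -K @ x + noise * rng.standard_normal(m)
        x_next = A @ x + B @ u
        data.append((x.copy(), u.copy(), x_next.copy()))
        x = x_next
    return data

def policy_evaluation(K, data):
    """Fit the quadratic Q-function z' H z, z = [x; u], by least squares on the
    Bellman equation Q(x_k,u_k) = x_k'Qc x_k + u_k'Rc u_k + Q(x_{k+1}, -K x_{k+1})."""
    Phi, y = [], []
    for x, u, x_next in data:
        z = np.concatenate([x, u])
        u_next = -K @ x_next                      # next action under current policy
        z_next = np.concatenate([x_next, u_next])
        Phi.append(np.kron(z, z) - np.kron(z_next, z_next))
        y.append(x @ Qc @ x + u @ Rc @ u)         # stage cost
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = theta.reshape(n + m, n + m)
    return 0.5 * (H + H.T)                        # enforce symmetry

def policy_improvement(H):
    """Greedy gain K = H_uu^{-1} H_ux, which minimizes z' H z over u."""
    Hux = H[n:, :n]
    Huu = H[n:, n:]
    return np.linalg.solve(Huu, Hux)

K = np.zeros((m, n))                              # stabilizing initial policy
for _ in range(10):
    H = policy_evaluation(K, simulate(K))
    K = policy_improvement(H)

print("learned feedback gain K:\n", K)
```

Each iteration evaluates the current policy from data and then improves it by minimizing the fitted quadratic, mirroring the policy evaluation/improvement cycle described in the abstract without using the system model.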

Keywords