IET Control Theory & Applications (Sep 2023)

Hierarchical multi‐agent reinforcement learning for multi‐aircraft close‐range air combat

  • Wei‐ren Kong,
  • De‐yun Zhou,
  • Yong‐jie Du,
  • Ying Zhou,
  • Yi‐yang Zhao

DOI: https://doi.org/10.1049/cth2.12413
Journal volume & issue: Vol. 17, No. 13, pp. 1840–1862

Abstract

Close-range autonomous air combat has gained significant attention from researchers working on applications of artificial intelligence (AI). Most previous studies on autonomous air combat focused on one-on-one scenarios, whereas modern air combat is mostly conducted in formations. With this in mind, a novel hierarchical maneuvering control architecture is introduced for multi-aircraft close-range air combat that can handle scenarios with variable-size formations. Three air combat sub-tasks are designed, and a recurrent soft actor-critic (RSAC) algorithm combined with competitive self-play (SP) is used to learn the corresponding sub-strategies. A novel hierarchical multi-agent reinforcement learning (HMARL) algorithm is then proposed to obtain the high-level strategy for target and sub-strategy selection. The training performance of the sub-strategies and the high-level strategy is evaluated in different air combat scenarios. Analysis of the obtained strategies shows that the formations exhibit effective cooperative behavior in both symmetric and asymmetric scenarios. Finally, ideas for the engineering implementation of the maneuvering control architecture are given. The study provides a solution for future multi-aircraft autonomous air combat.
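To make the hierarchical decision flow described in the abstract concrete, the sketch below shows one possible structure: a high-level policy picks a target and one of several sub-strategies for each friendly aircraft, and the selected low-level sub-strategy then produces a maneuver command. This is only an illustrative Python sketch based on the abstract, not the authors' implementation; all class names, sub-strategy labels, state fields, and the random stand-ins for the learned HMARL and RSAC policies are assumptions.

```python
# Illustrative sketch of the hierarchical control flow from the abstract.
# The learned components (HMARL high-level policy, RSAC sub-strategies) are
# replaced by placeholders; names and state layouts are hypothetical.
import random
from dataclasses import dataclass


@dataclass
class ManeuverCommand:
    """Continuous low-level control output (illustrative fields)."""
    roll_rate: float
    pitch_rate: float
    throttle: float


class SubStrategy:
    """Stands in for one learned sub-strategy (e.g. pursue / evade / support)."""

    def __init__(self, name: str):
        self.name = name

    def act(self, own_state: dict, target_state: dict) -> ManeuverCommand:
        # A trained recurrent policy would map an observation history to an
        # action; here we return a fixed placeholder command.
        return ManeuverCommand(roll_rate=0.0, pitch_rate=0.0, throttle=0.8)


class HighLevelPolicy:
    """Selects a target aircraft and a sub-strategy for each friendly agent.

    In the paper this selection is learned and handles variable-size
    formations; here it is a random stand-in.
    """

    def __init__(self, sub_strategies: list):
        self.sub_strategies = sub_strategies

    def decide(self, own_state: dict, enemy_states: list):
        target = random.choice(enemy_states)
        strategy = random.choice(self.sub_strategies)
        return target, strategy


def step_formation(friendly_states, enemy_states, policy: HighLevelPolicy):
    """One decision step for the formation: high-level selection, then a
    low-level maneuver command per aircraft."""
    commands = []
    for own in friendly_states:
        target, strategy = policy.decide(own, enemy_states)
        commands.append((own["id"], strategy.name, strategy.act(own, target)))
    return commands


if __name__ == "__main__":
    subs = [SubStrategy(n) for n in ("pursue", "evade", "support")]
    policy = HighLevelPolicy(subs)
    friendlies = [{"id": f"blue-{i}"} for i in range(2)]
    enemies = [{"id": f"red-{i}"} for i in range(3)]
    for aircraft_id, strat_name, cmd in step_formation(friendlies, enemies, policy):
        print(aircraft_id, strat_name, cmd)
```

Because the high-level choice is made per aircraft at every decision step, the same structure works for any number of friendly or enemy aircraft, which matches the variable-size formation requirement stated in the abstract.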

Keywords