IEEE Access (Jan 2024)

Research on Explainability Methods for Unmanned Combat Decision-Making Models

  • Wenlin Chen,
  • Shuai Wang,
  • Chong Jiang,
  • Siyu Wang,
  • Lina Hao

DOI
https://doi.org/10.1109/ACCESS.2024.3409616
Journal volume & issue
Vol. 12
pp. 83502–83512

Abstract


This paper proposes an unmanned combat decision-making algorithm that combines Proximal Policy Optimization (PPO) with an expert system; experimental results show that the algorithm achieves good decision-making performance. A strategy optimization method based on an autoencoder neural network is then proposed, which substantially improves the effective decision-making rate of the original algorithm. To address the opacity of the decision-making model produced by this deep reinforcement learning approach, a local interpretability algorithm, GLIME, is proposed, combining a Generative Adversarial Network (GAN) with Local Interpretable Model-agnostic Explanations (LIME) to improve the stability of the LIME algorithm. Finally, combined with the global interpretability algorithm Permutation Feature Importance (PFI), decision-making samples are analyzed from both local and global perspectives, providing comprehensive and stable explanations for the decision-making algorithm and thereby improving its transparency.
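For readers unfamiliar with the global method mentioned above, Permutation Feature Importance scores a feature by how much a model's performance degrades when that feature's column is shuffled, breaking its link to the target. The sketch below is a minimal, generic illustration of that idea (it is not the paper's implementation); the toy model, the R² metric, and all variable names are assumptions chosen for the demo.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Permutation Feature Importance (PFI): the average drop in a
    performance metric when one feature column is randomly shuffled."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # shuffle column j, keep the rest intact
            drops.append(baseline - metric(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy demo (hypothetical data): y depends only on feature 0,
# so feature 1 should receive an importance near zero.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0]
model = lambda X: 3.0 * X[:, 0]  # stand-in for a trained decision model
r2 = lambda y, p: 1.0 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)
imp = permutation_importance(model, X, y, r2)
```

Because the stand-in model ignores feature 1 entirely, shuffling that column leaves its predictions unchanged and its importance is exactly zero, while shuffling feature 0 collapses the R² score and yields a large positive importance.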
