IEEE Access (Jan 2022)

APER-DDQN: UAV Precise Airdrop Method Based on Deep Reinforcement Learning

  • Yan Ouyang,
  • Xinqing Wang,
  • Ruizhe Hu,
  • Honghui Xu

DOI
https://doi.org/10.1109/ACCESS.2022.3174105
Journal volume & issue
Vol. 10
pp. 50878–50891

Abstract

Accuracy is the most critical factor affecting the effectiveness of an unmanned aerial vehicle (UAV) airdrop. Methods that improve UAV airdrop accuracy through traditional modeling face limitations such as complex modeling, large numbers of model parameters, and difficulty in comprehensively accounting for all relevant factors in complex real-world environments. To solve the UAV precision airdrop problem more conveniently, this paper introduces deep reinforcement learning and proposes an Adaptive Priority Experience Replay Deep Double Q-Network (APER-DDQN) algorithm built on the Deep Double Q-Network (DDQN). The method adds a prioritized experience replay mechanism to DDQN and adopts an adaptive discount rate and learning rate to improve the algorithm's decision-making performance and stability. Furthermore, this paper designs and builds a simulation platform for training and testing the algorithm. The experimental results show that APER-DDQN performs well and solves the UAV precision airdrop problem more effectively while avoiding the complex modeling process. First, in the training stage, APER-DDQN converges faster, earns higher reward, and behaves more stably than DDQN and the Deep Q-Network (DQN). Then, in the test phase, our method achieves a higher average reward (3.01 on average) and success rate (41% on average) than decisions based on human experience, and it also outperforms DDQN and DQN. Finally, extended experiments verify the generalization ability of APER-DDQN across different environments.
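
To make the ingredients named in the abstract concrete, the sketch below shows, in Python, how a Double-DQN target, proportional prioritized experience replay, and adaptive discount/learning-rate schedules fit together. It is an illustrative reconstruction, not the authors' implementation: tabular Q-arrays stand in for the paper's neural networks, and the schedule shapes and hyperparameters (alpha, beta, and the gamma and learning-rate formulas) are assumptions, since the abstract does not specify them.

    import numpy as np

    class PrioritizedReplay:
        """Proportional prioritized experience replay (priorities ~ |TD error|)."""
        def __init__(self, capacity, alpha=0.6):
            self.capacity, self.alpha = capacity, alpha
            self.data, self.pos = [], 0
            self.priorities = np.zeros(capacity)

        def push(self, transition):
            # New transitions enter with the current maximum priority so they
            # are sampled at least once before their TD error is known.
            max_p = self.priorities[:len(self.data)].max() if self.data else 1.0
            if len(self.data) < self.capacity:
                self.data.append(transition)
            else:
                self.data[self.pos] = transition
            self.priorities[self.pos] = max_p
            self.pos = (self.pos + 1) % self.capacity

        def sample(self, batch_size, beta=0.4):
            p = self.priorities[:len(self.data)] ** self.alpha
            p /= p.sum()
            idx = np.random.choice(len(self.data), batch_size, p=p)
            weights = (len(self.data) * p[idx]) ** (-beta)  # importance weights
            weights /= weights.max()
            return idx, [self.data[i] for i in idx], weights

        def update_priorities(self, idx, td_errors, eps=1e-5):
            self.priorities[idx] = np.abs(td_errors) + eps

    def ddqn_targets(q_online, q_target, batch, gamma):
        """Double-DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a))."""
        s, a, r, s2, done = map(np.array, zip(*batch))
        best_a = q_online[s2].argmax(axis=1)                  # online net picks the action
        y = r + gamma * q_target[s2, best_a] * (1.0 - done)   # target net evaluates it
        return s, a, y

    # Assumed schedule shapes for the "adaptive" discount and learning rates.
    def adaptive_gamma(step):
        return min(0.99, 0.90 + 1e-5 * step)  # anneal the discount rate upward

    def adaptive_lr(step):
        return 1e-2 / (1.0 + 1e-4 * step)     # decay the learning rate

A single importance-weighted update step on a toy problem might then look like this:

    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 2
    q_online = rng.normal(size=(n_states, n_actions))
    q_target = q_online.copy()

    buf = PrioritizedReplay(capacity=100)
    for _ in range(32):
        buf.push((int(rng.integers(n_states)), int(rng.integers(n_actions)),
                  float(rng.random()), int(rng.integers(n_states)), 0.0))

    step = 0
    idx, batch, w = buf.sample(batch_size=8)
    s, a, y = ddqn_targets(q_online, q_target, batch, adaptive_gamma(step))
    td = y - q_online[s, a]
    q_online[s, a] += adaptive_lr(step) * w * td  # importance-weighted TD update
    buf.update_priorities(idx, td)
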

Keywords