Applied Sciences (Dec 2022)

Computation Offloading and Trajectory Control for UAV-Assisted Edge Computing Using Deep Reinforcement Learning

  • Huamei Qi,
  • Zheng Zhou

DOI
https://doi.org/10.3390/app122412870
Journal volume & issue
Vol. 12, no. 24
p. 12870

Abstract


Task offloading has attracted widespread attention as a way to accelerate applications and reduce energy consumption. However, in areas with surging traffic (e.g., nucleic acid testing sites and concerts), the limited resources of fixed base stations cannot meet user requirements. Unmanned aerial vehicles (UAVs) can effectively serve as temporary base stations or aerial access points for mobile devices (MDs). In the UAV-assisted mobile edge computing (MEC) system, we jointly optimize the trajectory and user association to maximize computational efficiency. This problem is a non-convex fractional problem, so a traditional method such as Dinkelbach's method alone is not sufficient to solve it. To facilitate online decision making for this joint optimization problem, we introduce deep reinforcement learning (DRL) and propose a double-layer cycle algorithm for maximizing computation efficiency (DCMCE). Specifically, in the outer loop, we model the trajectory planning problem as a Markov decision process and use deep reinforcement learning to output the best trajectory. In the inner loop, we use Dinkelbach's method to simplify the fractional problem and propose a priority function to optimize user association, maximizing computational efficiency. Simulation results show that DCMCE achieves higher computational efficiency than the baseline scheme.
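To make the double-layer structure concrete, the sketch below outlines how an outer trajectory loop and an inner Dinkelbach-style loop could fit together. It is a minimal illustration only: the random-walk `drl_policy`, the toy `offload_rate` channel model, the per-device energy cost, and the threshold-style priority rule are all assumptions and do not reproduce the paper's actual DRL agent, system model, or priority function.

```python
# Minimal sketch of the double-loop idea described in the abstract.
# All constants, the priority rule, and the random "policy" are illustrative
# assumptions, not the DCMCE algorithm itself.
import numpy as np

rng = np.random.default_rng(0)
NUM_MDS, NUM_STEPS = 5, 10
uav_pos = np.zeros(2)                        # UAV starts at the origin
md_pos = rng.uniform(0, 100, (NUM_MDS, 2))   # fixed mobile-device positions

def drl_policy(state):
    """Placeholder for the outer-loop DRL agent: returns a 2-D move."""
    return rng.uniform(-10, 10, 2)           # assumption: random-walk stand-in

def offload_rate(dist):
    """Toy achievable offloading rate, decreasing with UAV-MD distance."""
    return np.log2(1.0 + 100.0 / (1.0 + dist**2))

def inner_loop(uav_pos, max_iter=20, tol=1e-4):
    """Dinkelbach-style iteration: maximize bits/energy via a priority rule."""
    dists = np.linalg.norm(md_pos - uav_pos, axis=1)
    rates = offload_rate(dists)
    energy = 0.5 + 0.01 * dists              # toy per-MD offloading energy cost
    lam = 0.0
    for _ in range(max_iter):
        # Priority: associate MDs whose rate exceeds the lambda-weighted cost.
        priority = rates - lam * energy
        assoc = priority > 0
        bits = rates[assoc].sum()
        cost = energy[assoc].sum() + 1e-9
        new_lam = bits / cost                # Dinkelbach parameter update
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return assoc, lam                        # association set, efficiency value

total_eff = 0.0
for t in range(NUM_STEPS):                   # outer loop over time slots
    uav_pos = uav_pos + drl_policy(uav_pos)  # (placeholder) DRL moves the UAV
    assoc, eff = inner_loop(uav_pos)
    total_eff += eff
print(f"average computational efficiency (toy units): {total_eff / NUM_STEPS:.3f}")
```

In this sketch, the outer loop would normally be driven by a trained DRL policy whose reward is the efficiency returned by the inner loop; the random policy here only stands in for that agent to keep the example self-contained.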

Keywords