IEEE Access (Jan 2023)

Deep Reinforcement Learning-Based Computation Offloading in UAV Swarm-Enabled Edge Computing for Surveillance Applications

  • S. M. Asiful Huda,
  • Sangman Moh

DOI
https://doi.org/10.1109/ACCESS.2023.3292938
Journal volume & issue
Vol. 11
pp. 68269 – 68285

Abstract


The rapid development of the Internet of Things and wireless communication has resulted in the emergence of many latency-constrained and computation-intensive applications such as surveillance, virtual reality, and disaster monitoring. To satisfy the computational demand and reduce the prolonged transmission delay to the cloud, mobile edge computing (MEC) has evolved as a potential candidate that can improve task completion efficiency in a reliable fashion. Owing to their high mobility and ease of use, unmanned aerial vehicles (UAVs) are promising candidates that can be incorporated with MEC to support such computation-intensive and latency-critical applications. However, determining the ideal offloading decision for a UAV on the basis of the task characteristics remains a crucial challenge. In this paper, we investigate a surveillance application scenario of a hierarchical UAV swarm that includes a UAV-enabled MEC with a team of UAVs surveilling the area to be monitored. To determine the optimal offloading policy, we propose a deep reinforcement learning-based computation offloading (DRLCO) scheme using double deep Q-learning, which minimizes the weighted sum cost by jointly considering task execution delay and energy consumption. A performance study shows that the proposed DRLCO technique significantly outperforms conventional schemes in terms of offloading cost, energy consumption, and task execution delay. The better convergence and effectiveness of the proposed method over conventional schemes are also demonstrated.
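The two ingredients named in the abstract, a weighted sum cost over task delay and energy, and a double deep Q-learning target, can be sketched as follows. This is an illustrative sketch only: the weight `w`, discount `gamma`, and function names are assumptions for exposition, not the paper's notation or implementation.

```python
import numpy as np

def weighted_cost(delay, energy, w=0.5):
    """Weighted sum cost over task execution delay and energy consumption.
    The weight w (assumed here) trades latency against energy; the paper
    minimizes this jointly for the offloading decision."""
    return w * delay + (1.0 - w) * energy

def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.9, done=False):
    """Double deep Q-learning target: the online network *selects* the next
    action, while the separate target network *evaluates* it. This decoupling
    is what distinguishes double DQN from vanilla DQN and reduces Q-value
    overestimation."""
    if done:
        return reward
    a_star = int(np.argmax(next_q_online))          # selection: online net
    return reward + gamma * next_q_target[a_star]   # evaluation: target net
```

In an offloading setting like the one described, the per-step reward would typically be the negative weighted cost, e.g. `reward = -weighted_cost(delay, energy)`, so that minimizing cost corresponds to maximizing return.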

Keywords