IEEE Open Journal of the Communications Society (Jan 2021)

"Learning to Fly": A Distributed Deep Reinforcement Learning Framework for Software-Defined UAV Network Control

Hai Cheng, Lorenzo Bertizzolo, Salvatore D'Oro, John Buczek, Tommaso Melodia, Elizabeth Serena Bentley

DOI: https://doi.org/10.1109/OJCOMS.2021.3092690
Journal volume & issue: Vol. 2, pp. 1486–1504

Abstract

Control and performance optimization of wireless networks of Unmanned Aerial Vehicles (UAVs) require scalable approaches that go beyond architectures based on centralized network controllers. At the same time, the performance of model-based optimization approaches is often limited by the accuracy of the approximations and relaxations necessary to solve the UAV network control problem through convex optimization or similar techniques, and by the accuracy of the network channel models used. To address these challenges, this article introduces a new architectural framework to control and optimize UAV networks based on Deep Reinforcement Learning (DRL). Furthermore, it proposes a virtualized, 'ready-to-fly' emulation environment to generate the extensive wireless data traces necessary to train DRL algorithms; such traces are notoriously hard to generate and collect from battery-powered UAV networks. The training environment integrates previously developed wireless protocol stacks for UAVs into the CORE/EMANE emulation tool. Our 'ready-to-fly' virtual environment guarantees scalable collection of high-fidelity wireless traces that can be used to train DRL agents. The proposed DRL architecture enables distributed data-driven optimization (with up to 3.7× throughput improvement and a 0.2× latency reduction in reported experiments), facilitates network reconfiguration, and provides a scalable solution for large UAV networks.
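The abstract outlines a workflow in which DRL agents are trained offline on wireless traces collected from the emulated UAV network rather than from live flights. As a rough, generic sketch of that kind of training loop (not the authors' implementation: the state and action dimensions, the reward, and the `load_trace`/`train_step` helpers are all hypothetical assumptions), a minimal DQN-style update over emulator-generated transitions could look as follows:

```python
# Generic sketch: offline DQN-style training on emulator-generated traces.
# All dimensions, rewards, and helper names here are illustrative
# assumptions, not the paper's actual agent or interfaces.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 8    # assumed: e.g., per-link SINR, queue lengths, UAV positions
N_ACTIONS = 5    # assumed: e.g., discrete tx-power / channel / route choices

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # buffer of (state, action, reward, next_state)

def load_trace(s, a, r, s_next):
    """Store one transition harvested from an emulation run."""
    replay.append((s, a, r, s_next))

def train_step(batch_size=32, gamma=0.99):
    """One Q-learning update over a random minibatch of stored traces."""
    if len(replay) < batch_size:
        return
    s, a, r, s_next = map(torch.tensor, zip(*random.sample(replay, batch_size)))
    q = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # bootstrap target from the same network (no target net)
        target = r.float() + gamma * q_net(s_next.float()).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the framework the abstract describes, the stored transitions would come from CORE/EMANE emulation runs, which is what makes collecting training data at this scale practical for battery-powered UAV networks.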

Keywords