IEEE Access (Jan 2024)

Deep Reinforcement Learning-Based Joint Routing and Capacity Optimization in an Aerial and Terrestrial Hybrid Wireless Network

  • Zhe Wang,
  • Hongxiang Li,
  • Eric J. Knoblock,
  • Rafael D. Apaza

DOI: https://doi.org/10.1109/ACCESS.2024.3430191
Journal volume & issue: Vol. 12, pp. 132056–132069

Abstract

As the airspace experiences an increasing number of low-altitude aircraft, spectrum sharing between aerial and terrestrial users emerges as a compelling solution for improving spectrum utilization efficiency. In this paper, we consider a new Aerial and Terrestrial Hybrid Network (ATHN) comprising aerial vehicles (AVs), ground base stations (BSs), and terrestrial users (TUs). In this ATHN, AVs and BSs collaboratively form a multi-hop ad-hoc network with the objective of minimizing the average end-to-end (E2E) packet transmission delay. Meanwhile, the BSs and TUs form a terrestrial network aimed at maximizing the uplink and downlink sum capacity. Building on the concept of spectrum sharing between aerial and terrestrial users in the ATHN, we formulate a joint routing and capacity optimization (JRCO) problem, a multi-stage combinatorial problem subject to the curse of dimensionality. To address this problem, we propose a Deep Reinforcement Learning (DRL) based algorithm. Specifically, a Dueling Double Deep Q-Network (D3QN) structure is constructed to learn an optimal policy through trial and error. Extensive simulation results demonstrate the efficacy of our proposed solution.
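For readers unfamiliar with the D3QN structure referenced in the abstract, the following is a minimal, illustrative sketch of a generic Dueling Double Deep Q-Network agent in PyTorch. It is not the authors' implementation: the state and action dimensions, layer sizes, replay-buffer settings, and hyperparameters are placeholder assumptions, and the ATHN-specific state, action, and reward definitions from the paper are not modeled here.

```python
# Illustrative D3QN sketch (dueling architecture + double Q-learning target).
# All dimensions and hyperparameters below are assumptions for demonstration only.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F


class DuelingQNet(nn.Module):
    """Q-network with separate state-value and advantage streams (dueling)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.feature(x)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)


class D3QNAgent:
    def __init__(self, state_dim, n_actions, gamma=0.99, lr=1e-3, buffer_size=10_000):
        self.n_actions = n_actions
        self.gamma = gamma
        self.online = DuelingQNet(state_dim, n_actions)
        self.target = DuelingQNet(state_dim, n_actions)
        self.target.load_state_dict(self.online.state_dict())
        self.optimizer = torch.optim.Adam(self.online.parameters(), lr=lr)
        self.buffer = deque(maxlen=buffer_size)

    def act(self, state, epsilon=0.1):
        # Epsilon-greedy exploration ("trial and error").
        if random.random() < epsilon:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.online(torch.as_tensor(state, dtype=torch.float32).unsqueeze(0))
        return int(q.argmax(dim=1).item())

    def store(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def train_step(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s_next, done = (
            torch.as_tensor(x, dtype=torch.float32) for x in zip(*batch)
        )
        a = a.long()
        # Double DQN target: online net selects the next action, target net evaluates it.
        with torch.no_grad():
            next_a = self.online(s_next).argmax(dim=1, keepdim=True)
            q_next = self.target(s_next).gather(1, next_a).squeeze(1)
            y = r + self.gamma * (1.0 - done) * q_next
        q = self.online(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = F.smooth_l1_loss(q, y)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target.load_state_dict(self.online.state_dict())
```

In this generic setup, the decoupling of action selection (online network) from action evaluation (target network) is what distinguishes double Q-learning, while the value/advantage split is the dueling component; how states, actions, and rewards encode the JRCO problem is specific to the paper itself.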

Keywords