IEEE Open Journal of Vehicular Technology (Jan 2024)

Multi-Agent Deep Reinforcement Learning Based Optimizing Joint 3D Trajectories and Phase Shifts in RIS-Assisted UAV-Enabled Wireless Communications

  • Belayneh Abebe Tesfaw,
  • Rong-Terng Juang,
  • Hsin-Piao Lin,
  • Getaneh Berie Tarekegn,
  • Wendenda Nathanael Kabore

DOI
https://doi.org/10.1109/OJVT.2024.3486197
Journal volume & issue
Vol. 5
pp. 1712 – 1726

Abstract


Unmanned aerial vehicles (UAVs) serve as airborne access points or base stations, delivering network services to Internet of Things devices (IoTDs) in areas with compromised or absent infrastructure. However, urban obstacles such as trees and high buildings can obstruct the links between UAVs and IoTDs, degrading communication performance, and high flight altitudes can also incur significant path losses. To address these challenges, this paper introduces the deployment of reconfigurable intelligent surfaces (RISs) that smartly reflect signals to improve communication quality. It proposes a method that jointly optimizes the 3D trajectory of the UAV and the phase shifts of the RIS to maximize communication coverage and ensure satisfactory average achievable data rates in RIS-assisted UAV-enabled wireless communications under mobile multi-user scenarios. A multi-agent double-deep Q-network (MADDQN) algorithm is presented, in which each agent dynamically adjusts either the positioning of the UAV or the phase shifts of the RIS. The agents learn to collaborate by sharing the same reward to achieve a common goal. Simulation results demonstrate that the proposed method significantly outperforms baseline strategies in improving communication coverage and average achievable data rates, achieving a communication coverage score of 98.6% while guaranteeing acceptable achievable data rates for the IoTDs.
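The cooperative structure described in the abstract, one agent steering the UAV, another tuning the RIS phase shifts, both trained on a single shared reward with a double-Q update, can be illustrated with a minimal sketch. This is not the paper's neural MADDQN: it substitutes tabular Q-functions for deep networks, and the toy state space, step dynamics, and `shared_reward` function are all hypothetical stand-ins for the coverage/rate objective.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 6, 4   # toy discretization (assumption, not from the paper)
GAMMA, ALPHA = 0.9, 0.1      # discount factor and learning rate

class DoubleQAgent:
    """Tabular stand-in for one double-deep-Q agent (UAV or RIS)."""
    def __init__(self):
        self.q_online = np.zeros((N_STATES, N_ACTIONS))  # selects actions
        self.q_target = np.zeros((N_STATES, N_ACTIONS))  # evaluates them

    def act(self, s, eps=0.1):
        # Epsilon-greedy exploration over the online Q-values.
        if rng.random() < eps:
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax(self.q_online[s]))

    def update(self, s, a, r, s_next):
        # Double-Q rule: online table picks the next action,
        # target table scores it, decoupling selection from evaluation.
        a_star = int(np.argmax(self.q_online[s_next]))
        td_target = r + GAMMA * self.q_target[s_next, a_star]
        self.q_online[s, a] += ALPHA * (td_target - self.q_online[s, a])

    def sync(self):
        # Periodic target sync, as in (double) DQN training.
        self.q_target = self.q_online.copy()

# One agent adjusts the UAV position, the other the RIS phase shifts;
# both receive the SAME reward, which is what couples them cooperatively.
uav_agent, ris_agent = DoubleQAgent(), DoubleQAgent()

def shared_reward(s_uav, s_ris):
    # Hypothetical proxy for the coverage + achievable-rate reward.
    return 1.0 if s_uav == s_ris else -0.1

s_uav = s_ris = 0
for step in range(500):
    a_u, a_r = uav_agent.act(s_uav), ris_agent.act(s_ris)
    s_uav_next = (s_uav + a_u) % N_STATES   # toy transition dynamics
    s_ris_next = (s_ris + a_r) % N_STATES
    r = shared_reward(s_uav_next, s_ris_next)
    uav_agent.update(s_uav, a_u, r, s_uav_next)
    ris_agent.update(s_ris, a_r, r, s_ris_next)
    if step % 50 == 0:
        uav_agent.sync()
        ris_agent.sync()
    s_uav, s_ris = s_uav_next, s_ris_next

print(uav_agent.q_online.shape)  # (6, 4)
```

Because the reward is common to both agents, each Q-update implicitly credits the joint UAV/RIS configuration, which is the mechanism the abstract describes for learning a common goal.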

Keywords