Remote Sensing (Aug 2023)

Multi-Agent Deep Reinforcement Learning Framework Strategized by Unmanned Aerial Vehicles for Multi-Vessel Full Communication Connection

  • Jiabao Cao,
  • Jinfeng Dou,
  • Jilong Liu,
  • Xuanning Wei,
  • Zhongwen Guo

DOI: https://doi.org/10.3390/rs15164059
Journal volume & issue: Vol. 15, no. 16, p. 4059

Abstract


In the Internet of Vessels (IoV), it is difficult for any single unmanned surface vessel (USV) to act as a coordinator and establish full communication connections (FCCs) among USVs, owing to the lack of communication links and the complex natural environment of the sea surface. Existing solutions do not employ infrastructure to establish an intragroup FCC among USVs while relaying data. To address this issue, and considering the high-dimensional continuous action and state spaces of USVs, we propose a multi-agent deep reinforcement learning framework strategized by unmanned aerial vehicles (UAVs). UAVs can evaluate and navigate the cooperation and position adjustment of multiple USVs to establish an FCC. While ensuring FCCs, we aim to improve the IoV’s performance by maximizing the USVs’ communication range and movement fairness while minimizing their energy consumption, an objective that cannot be expressed in a closed-form equation. We transform this problem into a partially observable Markov game and design a separate actor–critic structure in which USVs act as actors and UAVs act as critics that evaluate the USVs’ actions and decide their movements. An information transition mechanism in the UAVs facilitates effective information collection and interaction among USVs. Simulation results demonstrate the superiority of our framework in terms of communication coverage, movement fairness, and average energy consumption, and show that it improves communication efficiency by at least 10% over DDPG, with gains exceeding 120% over other baselines.
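The sketch below illustrates, under stated assumptions, the separate actor–critic idea the abstract describes: each USV carries a decentralized actor that maps its local observation to a continuous movement action, while a UAV hosts a centralized critic that evaluates the joint observations and actions of the USVs it oversees. The network sizes, observation/action dimensions, and the single-critic simplification are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal illustrative sketch (assumed structure, not the paper's implementation):
# decentralized USV actors evaluated by a centralized UAV-side critic.
import torch
import torch.nn as nn


class USVActor(nn.Module):
    """Decentralized policy on a USV: local observation -> bounded continuous movement action."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # Tanh keeps movement commands bounded
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class UAVCritic(nn.Module):
    """Centralized critic hosted on a UAV: joint USV observations and actions -> scalar value."""
    def __init__(self, n_usvs: int, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        in_dim = n_usvs * (obs_dim + act_dim)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, joint_obs: torch.Tensor, joint_act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))


if __name__ == "__main__":
    n_usvs, obs_dim, act_dim = 4, 10, 2              # illustrative sizes
    actors = [USVActor(obs_dim, act_dim) for _ in range(n_usvs)]
    critic = UAVCritic(n_usvs, obs_dim, act_dim)

    obs = torch.randn(1, n_usvs, obs_dim)            # one local observation per USV
    acts = torch.stack([a(obs[:, i]) for i, a in enumerate(actors)], dim=1)
    q = critic(obs.flatten(1), acts.flatten(1))      # UAV scores the joint behaviour
    print(q.shape)                                   # torch.Size([1, 1])
```

In such a design, training the UAV critic on joint information while keeping USV actors decentralized mirrors the centralized-training, decentralized-execution pattern common in multi-agent actor–critic methods such as MADDPG.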

Keywords