IEEE Open Journal of the Communications Society (Jan 2024)

Task-Oriented Satellite-UAV Networks With Mobile-Edge Computing

  • Peng Wei,
  • Wei Feng,
  • Yunfei Chen,
  • Ning Ge,
  • Wei Xiang,
  • Shiwen Mao

DOI
https://doi.org/10.1109/OJCOMS.2023.3341251
Journal volume & issue
Vol. 5
pp. 202–220

Abstract

Networked robots have become crucial for unmanned applications, since they can collaborate to complete complex tasks in remote, hazardous, or depopulated areas. Because deploying cellular network infrastructure in such areas is cost-inefficient, hybrid satellite-UAV networks have emerged as a promising solution. These networks provide seamless, on-demand connectivity for multiple robots with diverse task requirements, and support computation-intensive, latency-sensitive services through mobile edge computing (MEC)-based offloading. However, to complete tasks within a limited time, the rapid collective movement of mobile robots may cause frequent service migration, and a large number of gathered robots may compete for the limited bandwidth of satellite and UAV links. As a result, offloading latency may increase significantly. To address this issue, the average completion time of multi-robot offloading in task-oriented satellite-UAV networks with MEC is formulated as an optimization problem. Unlike conventional mobility-aware MEC-based offloading schemes, a joint optimization of mobility control, data offloading, and resource allocation is proposed, which exploits velocity control of the robots. Using Lyapunov optimization, the original problem is simplified into minimizing the average offloading completion time for all robots within UAV and satellite coverage. A multi-agent $Q$-learning algorithm, consisting of multi-group dual-agent $Q$-learning, is proposed based on local state observation and global reward calculation. In each dual-agent $Q$-learning group, one agent is responsible for velocity control and communication resource allocation, while the other handles data offloading and computational resource allocation. The convergence of the proposed multi-agent $Q$-learning algorithm is also analyzed theoretically. Simulation results show that the proposed scheme reduces offloading latency by up to 35% in the multi-robot environment compared with its conventional counterparts.
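To make the dual-agent structure described above concrete, the following is a minimal sketch of one robot group with two tabular $Q$-learning agents: one choosing a (velocity level, bandwidth share) pair and the other a (offloading decision, CPU share) pair, each observing only a local state but updated with the same global reward. The state encoding, action discretization, and reward value here are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import random
from collections import defaultdict

class QAgent:
    """Tabular Q-learning agent with an epsilon-greedy policy over a discrete action set."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)      # Q[(state, action)] -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Explore with probability epsilon, otherwise exploit the current Q-table.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# One robot group (hypothetical action discretization):
# agent_a controls velocity level and communication bandwidth share,
# agent_b controls the offloading decision and computational resource share.
velocity_levels, bandwidth_shares = [0, 1, 2], [0, 1, 2]
offload_choices, cpu_shares = [0, 1], [0, 1, 2]
agent_a = QAgent([(v, b) for v in velocity_levels for b in bandwidth_shares])
agent_b = QAgent([(o, c) for o in offload_choices for c in cpu_shares])

# Toy training step: local state observations, one shared global reward
# (e.g., the negative average task completion time reported by the network).
state_a, state_b = ("pos0", "link0"), ("task0", "cpu0")
act_a, act_b = agent_a.act(state_a), agent_b.act(state_b)
global_reward = -1.0
agent_a.update(state_a, act_a, global_reward, ("pos1", "link1"))
agent_b.update(state_b, act_b, global_reward, ("task1", "cpu1"))
```

Sharing a single global reward across both agents, as sketched here, reflects the abstract's "local state observation and global reward calculation"; how states and rewards are actually constructed is specified in the full paper.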

Keywords