IEEE Access (Jan 2020)

Distributed Edge Computing Offloading Algorithm Based on Deep Reinforcement Learning

  • Yunzhao Li,
  • Feng Qi,
  • Zhili Wang,
  • Xiuming Yu,
  • Sujie Shao

DOI: https://doi.org/10.1109/ACCESS.2020.2991773
Journal volume & issue: Vol. 8, pp. 85204–85215

Abstract

As a mode of processing task requests, the edge computing paradigm can reduce task delay and effectively alleviate the network congestion caused by the proliferation of Internet of Things (IoT) devices, compared with cloud computing. However, in the actual construction of the network, there are various edge autonomous subnets in adjacent areas, which can lead to an imbalance of server load among autonomous subnets during peak periods of task requests. In this paper, a deep reinforcement learning algorithm is proposed to solve the complex computation offloading problem of heterogeneous Edge Computing Server (ECS) collaborative computing. The problem is solved based on the real-time state of the network and the attributes of the task, adopting the Deep Deterministic Policy Gradient (DDPG) method, which combines the Actor-Critic architecture with policy gradients, to make optimized computation offloading decisions. Considering multiple tasks, the heterogeneity of edge subnets, and the mobility of edge devices, the proposed algorithm can learn the network environment and generate computation offloading decisions that minimize task delay. The simulation results show that the proposed DDPG-based algorithm is competitive with the Deep Q Network (DQN) and Asynchronous Advantage Actor-Critic (A3C) algorithms. Moreover, the optimal solutions are leveraged to analyze the influence of edge network parameters on task delay.
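
The paper itself does not publish code; purely as a rough illustration of the DDPG actor-critic structure the abstract describes, a minimal PyTorch sketch follows. The state and action dimensions, network shapes, and the interpretation of the action as per-ECS offloading fractions are all assumptions made here for clarity, not the authors' implementation.

    import torch
    import torch.nn as nn

    # Hypothetical dimensions (illustrative assumptions, not from the paper):
    # the state could encode per-ECS load, link rates, and task attributes;
    # the action could be the fraction of a task offloaded to each candidate ECS.
    STATE_DIM = 8
    ACTION_DIM = 3

    class Actor(nn.Module):
        """Deterministic policy: maps the observed network state to an
        offloading decision; softmax keeps the fractions summing to 1."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, ACTION_DIM), nn.Softmax(dim=-1))
        def forward(self, s):
            return self.net(s)

    class Critic(nn.Module):
        """Q(s, a): scores a state-action pair, e.g. as negative task delay."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                nn.Linear(64, 1))
        def forward(self, s, a):
            return self.net(torch.cat([s, a], dim=-1))

    actor, critic = Actor(), Critic()
    state = torch.rand(1, STATE_DIM)   # stand-in for a real network observation
    action = actor(state)              # offloading fractions per ECS
    q_value = critic(state, action)    # predicted value (negative delay)
    print(action, q_value)

In full DDPG training, the critic would be regressed toward a bootstrapped delay-based reward target and the actor updated along the critic's gradient, with target networks and a replay buffer stabilizing both; the sketch above shows only the decision step.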
