IEEE Access (Jan 2021)

Deep Reinforcement Learning for Task Offloading in Edge Computing Assisted Power IoT

  • Jiangyi Hu,
  • Yang Li,
  • Gaofeng Zhao,
  • Bo Xu,
  • Yiyang Ni,
  • Haitao Zhao

DOI
https://doi.org/10.1109/ACCESS.2021.3092381
Journal volume & issue
Vol. 9
pp. 93892 – 93901

Abstract

The Power Internet of Things (PIoT) is a promising solution to meet the increasing electricity demand of modern cities, but real-time processing and analysis of the massive data collected by PIoT devices is challenging due to the devices' limited computing capability and the long distance to the cloud center. In this paper, we consider edge computing assisted PIoT, where the computing tasks of the devices can be either processed locally or offloaded to edge servers. Aiming to maximize the long-term system utility, defined as a weighted sum of the reduction in latency and energy consumption, we propose a novel task offloading algorithm based on deep reinforcement learning that jointly optimizes task scheduling, the transmit power of the PIoT devices, and the computing resource allocation of the edge servers. Specifically, task execution on each edge server is modeled as a queuing system, in which the current queue state affects the waiting time of subsequent tasks. The transmit power and computing resource allocation are first optimized separately, and then a deep Q-learning algorithm is adopted to make task scheduling decisions. Numerical results show that the proposed method can improve the system utility.
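To make the utility described in the abstract concrete, the following is a minimal sketch of a weighted latency-and-energy utility and a queue-aware offloading latency. All function names, weights, and the linear queue-backlog model are illustrative assumptions for exposition; they are not taken from the paper, whose exact formulation appears in the full text.

```python
def offload_latency(task_bits, task_cycles, rate_bps,
                    server_cps, queue_backlog_cycles):
    """Latency of offloading one task to an edge server (illustrative model).

    The current queue state delays the new task: the backlog already in the
    server's queue must be drained before this task is computed, mirroring
    the abstract's point that the queue state affects waiting time.
    """
    transmit_time = task_bits / rate_bps            # uplink transmission
    waiting_time = queue_backlog_cycles / server_cps  # drain existing queue
    compute_time = task_cycles / server_cps         # execute this task
    return transmit_time + waiting_time + compute_time


def system_utility(local_latency, local_energy,
                   offload_latency_s, offload_energy,
                   w_latency=0.5, w_energy=0.5):
    """Weighted sum of latency reduction and energy reduction achieved by
    offloading relative to local execution (weights are assumptions)."""
    latency_gain = local_latency - offload_latency_s
    energy_gain = local_energy - offload_energy
    return w_latency * latency_gain + w_energy * energy_gain
```

A positive utility means offloading beats local execution under the chosen weights; an RL scheduler would use such a per-step utility as its reward signal when choosing between local processing and each candidate edge server.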

Keywords