IEEE Access (Jan 2024)

An Energy-Efficient Dynamic Offloading Algorithm for Edge Computing Based on Deep Reinforcement Learning

  • Keyu Zhu,
  • Shaobo Li,
  • Xingxing Zhang,
  • Jinming Wang,
  • Cankun Xie,
  • Fengbin Wu,
  • Rongxiang Xie

DOI
https://doi.org/10.1109/ACCESS.2024.3452190
Journal volume & issue
Vol. 12
pp. 127489–127506

Abstract

Mobile edge computing (MEC) is a promising computing paradigm for Artificial Intelligence Generated Content (AIGC), offering users instant, customized, and personalized services. However, as the AIGC user base and its service demands continue to grow, edge computing nodes must handle offloading tasks of increasing number and complexity, leading to significant energy consumption in edge systems. This paper introduces a distributed task offloading framework (EE-A2C) that uses the Advantage Actor-Critic algorithm to improve energy efficiency in edge cloud environments. First, the framework enables distributed interaction between multiple agents and edge environments through communication queues, with the goal of minimizing average energy consumption and latency for AIGC users. Second, to achieve adaptive and efficient offloading decisions, we develop a reward-sharing model that combines latency and energy consumption. Finally, we incorporate an LSTM to strengthen the model's ability to capture critical energy consumption information, yielding more robust decision-making. Compared with eight state-of-the-art energy-saving algorithms, the proposed EE-A2C framework makes better use of the computing power of edge nodes, significantly reduces average energy consumption and latency, and improves the energy efficiency of the edge cloud system.
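The abstract's main ingredients (an Advantage Actor-Critic learner, a reward built from latency and energy consumption, and an LSTM over the observation history) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the observation layout, the weights w_lat and w_eng, the network sizes, and all function names are assumptions made for the sketch.

    # Minimal sketch (not the paper's code) of an LSTM-augmented
    # Advantage Actor-Critic policy for task offloading, assuming a
    # discrete action space (local execution vs. offloading targets)
    # and a weighted latency/energy reward. All names are hypothetical.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def reward(latency, energy, w_lat=0.5, w_eng=0.5):
        # Negative weighted cost: lower latency/energy -> higher reward.
        return -(w_lat * latency + w_eng * energy)

    class LSTMActorCritic(nn.Module):
        def __init__(self, obs_dim, num_actions, hidden=128):
            super().__init__()
            self.encoder = nn.Linear(obs_dim, hidden)
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.actor = nn.Linear(hidden, num_actions)  # offloading-decision logits
            self.critic = nn.Linear(hidden, 1)           # state-value estimate

        def forward(self, obs_seq, hx=None):
            # obs_seq: (batch, time, obs_dim) history of queue/channel states;
            # the LSTM summarizes past energy and load information.
            z = F.relu(self.encoder(obs_seq))
            z, hx = self.lstm(z, hx)
            logits = self.actor(z[:, -1])                # act on the latest step
            value = self.critic(z[:, -1]).squeeze(-1)
            return logits, value, hx

    def a2c_loss(logits, value, action, ret, entropy_coef=0.01):
        # Advantage = bootstrapped return minus the critic's estimate.
        dist = torch.distributions.Categorical(logits=logits)
        advantage = ret - value
        policy_loss = -(dist.log_prob(action) * advantage.detach()).mean()
        value_loss = F.mse_loss(value, ret)
        return policy_loss + 0.5 * value_loss - entropy_coef * dist.entropy().mean()

In this sketch the advantage term (ret - value) plays the role the abstract describes: the actor is pushed toward offloading decisions whose combined latency/energy cost beats the critic's expectation, while the entropy bonus preserves exploration across heterogeneous edge nodes.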

Keywords