EURASIP Journal on Advances in Signal Processing (Apr 2024)

A deep reinforcement approach for computation offloading in MEC dynamic networks

  • Yibiao Fan,
  • Xiaowei Cai

DOI
https://doi.org/10.1186/s13634-024-01142-2
Journal volume & issue
Vol. 2024, no. 1
pp. 1 – 19

Abstract


In this study, we investigate the challenges associated with dynamic time-slot server selection in mobile edge computing (MEC) systems. We account for the fluctuating nature of user access at edge servers and the various factors that influence server workload, including offloading policies, offloading ratios, users’ transmission power, and the servers’ reserved capacity. To streamline the selection of edge servers with a view to long-term optimization, we cast the problem as a Markov Decision Process (MDP) and propose a Deep Reinforcement Learning (DRL)-based algorithm as a solution. Our approach learns the selection strategy by analyzing the performance of server selections in previous iterations. Simulation results show that our DRL-based algorithm surpasses the benchmarks, delivering the lowest average latency.
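The abstract's idea of learning a server-selection strategy from the observed performance of past selections can be illustrated with a deliberately minimal sketch. This is not the paper's algorithm: it collapses the MDP to a single-state, tabular Q-learning loop (no deep network), and the server latencies and all parameter values below are hypothetical, chosen only to show how rewarding low observed latency steers the policy toward the better server.

```python
import random

def train_server_selection(latencies, episodes=2000, alpha=0.1,
                           gamma=0.9, eps=0.2, seed=0):
    """Single-state Q-learning sketch for edge-server selection.

    Actions are edge servers; the reward is the negative observed
    latency, so maximizing reward minimizes average delay.
    """
    rng = random.Random(seed)
    n = len(latencies)
    q = [0.0] * n  # one Q-value per server (single-state simplification)
    for _ in range(episodes):
        # Epsilon-greedy: explore a random server, else exploit the best one
        if rng.random() < eps:
            a = rng.randrange(n)
        else:
            a = max(range(n), key=lambda i: q[i])
        # Noisy latency observation for the chosen server (hypothetical model)
        observed = rng.gauss(latencies[a], 0.1)
        reward = -observed
        # Standard Q-learning update
        q[a] += alpha * (reward + gamma * max(q) - q[a])
    return q

# Hypothetical mean latencies (ms) for three edge servers
q_values = train_server_selection([5.0, 2.0, 8.0])
best_server = max(range(len(q_values)), key=lambda i: q_values[i])
```

After training, `best_server` points at the server with the lowest mean latency (index 1 here), mirroring how the paper's DRL agent is described as learning from the performance of previous selections; the full method would replace the Q-table with a neural network and a genuinely time-varying state.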

Keywords