Tongxin xuebao (Mar 2024)
Graph-to-sequence deep reinforcement learning based complex task deployment strategy in MEC
Abstract
With the help of mobile edge computing (MEC) and network virtualization technology, mobile terminals can offload the computing, storage, transmission and other resources required for executing various complex applications to nearby edge service nodes, so as to obtain a more efficient service experience. For edge service providers, the problem of energy-optimal decision-making when deploying complex tasks was comprehensively investigated. Firstly, the deployment of complex tasks onto multiple edge service nodes was modeled as a mixed integer programming (MIP) problem, and then a deep reinforcement learning (DRL) solution strategy integrating a graph-to-sequence model was proposed. Potential dependencies among the multiple subtasks were extracted and learned through a graph-based encoder design, so that common task-deployment patterns could be discovered automatically from the available resource status and utilization rate of the edge service nodes, and the deployment strategy with optimal energy consumption could ultimately be obtained quickly. Compared with representative benchmark strategies at different network scales, the experimental results show that the proposed strategy is significantly superior to the benchmark strategies in terms of task deployment error ratio, total power consumption of the MEC system, and algorithm solving efficiency.
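The graph-to-sequence pipeline sketched in the abstract (encode subtask dependencies with a graph encoder, then decode a per-subtask deployment decision) can be illustrated with a minimal toy example. Everything below is an illustrative assumption, not the paper's architecture: the message-passing update, embedding sizes, and the greedy energy-score decoder are placeholders for the learned DRL policy.

```python
import numpy as np

def encode_graph(adj, feats, iters=2, seed=0):
    """Toy message-passing encoder: each subtask embedding mixes in the
    embeddings of its dependency predecessors (hypothetical design)."""
    rng = np.random.default_rng(seed)
    d = feats.shape[1]
    W = rng.normal(scale=0.1, size=(d, d))  # shared weight matrix
    h = feats.astype(float)
    for _ in range(iters):
        msgs = adj.T @ h            # aggregate messages from predecessors
        h = np.tanh(h @ W + msgs)   # update node embeddings
    return h

def greedy_deploy(h, node_profiles):
    """Greedy decoder stand-in: assign each subtask to the edge node with
    the lowest estimated energy score (placeholder for the DRL policy)."""
    # scores[i, j]: toy energy estimate for subtask i on edge node j
    scores = np.abs(h @ node_profiles.T)
    return scores.argmin(axis=1)

# 4 subtasks with chain dependencies 0 -> 1 -> 2 -> 3, and 3 edge nodes
adj = np.zeros((4, 4))
adj[0, 1] = adj[1, 2] = adj[2, 3] = 1
feats = np.eye(4)                      # one-hot subtask features
node_profiles = np.array([[1.0, 0, 0, 0],
                          [0, 1.0, 0, 0],
                          [0, 0, 1.0, 0]])
h = encode_graph(adj, feats)
plan = greedy_deploy(h, node_profiles)  # one edge-node index per subtask
print(plan)
```

In the paper's actual strategy the decoder is trained with DRL against the MIP objective (total energy consumption), rather than using a fixed greedy score as above.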