Alexandria Engineering Journal (Dec 2024)
Reinforcement learning-based computation offloading in edge computing: Principles, methods, challenges
Abstract
With the rapid development of mobile communication technologies and Internet of Things (IoT) devices, Multi-Access Edge Computing (MEC) has become one of the most promising technologies for wireless communication. In MEC systems, computation offloading can provide IoT devices with faster and more reliable data processing, but edge servers have limited computing and storage resources. Before an IoT device can offload a computation task to an edge server, two prerequisites must hold: the edge server must have enough remaining available resources, and it must cache the services the task requires; only then can the best way to offload the task be determined. Therefore, to process tasks efficiently, offloading decisions, resource allocation, and edge caching need to be jointly considered when offloading tasks to edge servers. Reinforcement Learning (RL) has recently emerged as a key technique for solving the computation offloading problem in MEC, and a large number of optimization methods have been proposed. In this context, we provide a comprehensive survey of the fundamental principles and theories of RL-based computation offloading in MEC, including mechanisms for finding optimal offloading decisions, methods for joint resource allocation, and means for joint edge caching. In addition, we discuss the challenges and future directions of RL-based computation offloading methods.
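To make the RL framing of the offloading decision concrete, the following is a minimal, self-contained sketch of tabular Q-learning for a binary offloading choice (run locally vs. offload). The state space, delay cost model, and all hyperparameters below are illustrative assumptions for exposition only; they are not drawn from any particular method covered in the survey.

```python
import random

# Toy model (all values are assumptions): an agent decides per task whether
# to execute locally (action 0) or offload to an edge server (action 1).
# The state encodes the edge server's load level: 0 = idle, 1 = moderate,
# 2 = busy.
N_STATES, N_ACTIONS = 3, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def delay_cost(state, action):
    """Assumed cost model: local execution has a fixed delay, while
    offloading delay (transmission + queuing) grows with server load."""
    local_delay = 5.0
    offload_delay = 1.0 + 3.0 * state
    return local_delay if action == 0 else offload_delay

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        # Epsilon-greedy action selection.
        if rng.random() < EPSILON:
            a = rng.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: q[s][x])
        r = -delay_cost(s, a)          # reward = negative task delay
        s2 = rng.randrange(N_STATES)   # server load evolves randomly here
        # Standard Q-learning update.
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
    return q

q = train()
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)  # learned rule: offload when the server is lightly loaded,
               # run locally when it is busy
```

The learned policy reflects the joint consideration the abstract describes: the offloading decision depends on the server's remaining resources, and richer formulations covered in the survey extend the state and action spaces to include resource allocation and cached services.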