Scientific Reports (Jul 2024)
Adaptive task migration strategy with delay risk control and reinforcement learning for emergency monitoring
Abstract
The timely and reliable handling of post-disaster emergency monitoring tasks is crucial for effective rescue operations. UAV-assisted edge computing plays a pivotal role in the rapid deployment of such systems. However, challenges persist due to communication and computation resource bottlenecks when dealing with delay-sensitive monitoring tasks. In dynamic post-disaster environments, task scheduling and resource allocation decisions directly affect the system’s ability to process tasks. Therefore, this paper proposes an adaptive task migration decision-making system for emergency monitoring tasks in UAV-assisted edge computing. First, we decompose the optimization objective according to the task processing workflow and devise a stepwise delay risk control and resource recovery mechanism based on early discarding. Second, by integrating multi-agent reinforcement learning (MARL), optimal strategies for task offloading, UAV queue scheduling, and communication resource allocation are learned to enhance the decision system’s environmental awareness and maximize the successful completion of emergency monitoring tasks. Simulation experiments demonstrate that the algorithm significantly improves the success rate of migrated tasks and the data processing capacity, validating its convergence and effectiveness.
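To make the early-discarding idea mentioned in the abstract concrete, the following is a minimal Python sketch of one plausible stepwise delay-risk check: a task is dropped as soon as its estimated remaining delay (queueing + transmission + computation) exceeds its residual deadline budget, freeing queue and communication resources for other tasks. All names, fields, and numeric parameters here are illustrative assumptions and do not reproduce the paper's actual model.

```python
# Hypothetical sketch of early discarding for delay risk control.
# Assumption: a single-hop estimate of remaining delay; not the paper's exact formulation.
from dataclasses import dataclass


@dataclass
class Task:
    task_id: int
    data_bits: float      # task size in bits
    cpu_cycles: float     # required CPU cycles
    deadline_s: float     # maximum tolerable delay in seconds
    elapsed_s: float = 0  # delay already accumulated (e.g. waiting so far)


def estimated_remaining_delay(task, uplink_rate_bps, cpu_hz, queue_backlog_s):
    """Rough remaining delay: current queue backlog + transmission + computation."""
    tx_delay = task.data_bits / uplink_rate_bps
    cpu_delay = task.cpu_cycles / cpu_hz
    return queue_backlog_s + tx_delay + cpu_delay


def early_discard_filter(tasks, uplink_rate_bps, cpu_hz, queue_backlog_s):
    """Split tasks into (admitted, discarded) using a stepwise delay-risk check."""
    admitted, discarded = [], []
    for t in tasks:
        remaining_budget = t.deadline_s - t.elapsed_s
        est = estimated_remaining_delay(t, uplink_rate_bps, cpu_hz, queue_backlog_s)
        if est > remaining_budget:
            discarded.append(t)   # drop early, reclaiming queue/communication resources
        else:
            admitted.append(t)
            queue_backlog_s += t.cpu_cycles / cpu_hz  # admitted task grows the backlog
    return admitted, discarded


if __name__ == "__main__":
    tasks = [
        Task(1, data_bits=2e6, cpu_cycles=5e8, deadline_s=0.8),
        Task(2, data_bits=8e6, cpu_cycles=2e9, deadline_s=0.5, elapsed_s=0.2),
    ]
    ok, dropped = early_discard_filter(tasks, uplink_rate_bps=20e6, cpu_hz=5e9, queue_backlog_s=0.1)
    print("admitted:", [t.task_id for t in ok], "discarded:", [t.task_id for t in dropped])
```

In the paper's setting this kind of check would sit inside the UAV queue-scheduling loop, with the offloading and resource-allocation decisions supplied by the MARL agents rather than fixed parameters as above.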