IET Communications (Nov 2024)

Resource allocation scheduling scheme for task migration and offloading in 6G Cybertwin internet of vehicles based on DRL

  • Rui Wei,
  • Tuanfa Qin,
  • Jinbao Huang,
  • Ying Yang,
  • Junyu Ren,
  • Lei Yang

DOI
https://doi.org/10.1049/cmu2.12826
Journal volume & issue
Vol. 18, no. 18
pp. 1244–1265

Abstract

As vehicular technology advances, intelligent vehicles generate numerous computation-intensive tasks that strain the computational resources of both the vehicles themselves and the Internet of Vehicles (IoV). The traditional IoV, with its fixed network structure and limited scalability, cannot meet growing computational demands or support next-generation mobile communication technologies. In congested areas, near-end Mobile Edge Computing (MEC) resources are often overloaded while far-end MEC servers sit underused, degrading service quality. A novel network framework that combines sixth-generation mobile communication (6G) and digital twin technologies with task migration promises to alleviate these inefficiencies. To address these challenges, a task migration and re-offloading model based on task attribute classification is introduced, employing a hybrid deep reinforcement learning (DRL) algorithm, the Dueling Double Q Network DDPG (QDPG). This algorithm merges the strengths of the Deep Deterministic Policy Gradient (DDPG) and the Dueling Double Deep Q-Network (D3QN), handling continuous and discrete action domains respectively to optimize task migration and re-offloading in the IoV. The inclusion of the Mini Batch K-Means algorithm enhances learning efficiency and optimization in the DRL algorithm. Experimental results show that QDPG significantly boosts task efficiency and computational performance, providing a robust solution for resource allocation in the IoV.
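The task-attribute classification step mentioned above relies on Mini Batch K-Means, which updates cluster centres from small random batches instead of the full dataset. The sketch below is a generic pure-Python illustration of that standard algorithm, not the authors' implementation; the task feature layout (data size, CPU cycles, deadline) is a hypothetical example of the kind of attributes such a classifier might use.

```python
import random

def mini_batch_kmeans(points, k, batch_size=8, iters=100, seed=0):
    """Cluster task feature vectors (e.g. [data size, CPU cycles,
    deadline]) so tasks with similar attributes can be handled by a
    shared offloading policy. Generic sketch of Mini Batch K-Means."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    counts = [0] * k  # per-centre assignment counts
    for _ in range(iters):
        batch = [rng.choice(points) for _ in range(batch_size)]
        for p in batch:
            # assign the sampled point to its nearest centre
            j = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            counts[j] += 1
            eta = 1.0 / counts[j]  # decaying per-centre learning rate
            # move the centre a small step toward the point
            centers[j] = [(1 - eta) * a + eta * b
                          for a, b in zip(centers[j], p)]
    return centers

def assign(point, centers):
    """Return the index of the nearest centre for a new task."""
    return min(range(len(centers)), key=lambda c: sum(
        (a - b) ** 2 for a, b in zip(point, centers[c])))
```

Because each update touches only a small batch, the classifier stays cheap enough to run online as new tasks arrive, which is what makes it attractive as a preprocessing stage for a DRL scheduler.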

Keywords