Journal of Cloud Computing: Advances, Systems and Applications (Jun 2021)

Computation offloading strategy based on deep reinforcement learning for connected and autonomous vehicle in vehicular edge computing

  • Bing Lin,
  • Kai Lin,
  • Changhang Lin,
  • Yu Lu,
  • Ziqing Huang,
  • Xinwei Chen

DOI: https://doi.org/10.1186/s13677-021-00246-6
Journal volume & issue: Vol. 10, No. 1, pp. 1–17

Abstract


Connected and Automated Vehicle (CAV) is a transformative technology that has great potential to improve urban traffic and driving safety. The Electric Vehicle (EV) is becoming the key platform for next-generation CAVs by virtue of its advantages in energy saving. Due to the limited endurance and computing capacity of EVs, it is challenging to meet the surging demand of computing-intensive and delay-sensitive in-vehicle intelligent applications. Therefore, computation offloading has been employed to extend a single vehicle’s computing capacity. Although various offloading strategies have been proposed to achieve good computing performance in the Vehicular Edge Computing (VEC) environment, it remains challenging to jointly optimize the offloading failure rate and the total energy consumption of the offloading process. To address this challenge, in this paper, we establish a computation offloading model based on a Markov Decision Process (MDP), taking into consideration task dependencies, vehicle mobility, and different computing resources for task offloading. We then design a computation offloading strategy based on deep reinforcement learning, and leverage the Deep Q-Network based on Simulated Annealing (SA-DQN) algorithm to optimize the joint objectives. Experimental results show that the proposed strategy effectively reduces the offloading failure rate and the total energy consumption for application offloading.
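The abstract does not reproduce the SA-DQN procedure, but the core idea it names, combining DQN value estimates with a simulated-annealing acceptance rule for exploration, can be sketched briefly. The following is a minimal, hypothetical reading of that combination, not the authors' code: the network `QNet`, the function `sa_select_action`, and the geometric cooling schedule are all illustrative assumptions.

```python
import math
import random
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small MLP approximating Q(s, a) for an offloading MDP.
    State encodes task/vehicle features; actions are offloading targets.
    Sizes here are placeholders, not taken from the paper."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def sa_select_action(qnet: QNet, state, temperature: float) -> int:
    """Simulated-annealing exploration over Q-values (assumed scheme):
    start from the greedy action, then accept a random candidate with
    Metropolis probability exp((Q_cand - Q_greedy) / T). Worse actions
    are explored often at high temperature T and rarely as T cools."""
    with torch.no_grad():
        q = qnet(torch.as_tensor(state, dtype=torch.float32))
    greedy = int(q.argmax())
    candidate = random.randrange(q.numel())
    delta = float(q[candidate] - q[greedy])  # <= 0 when candidate is worse
    if delta >= 0 or random.random() < math.exp(delta / max(temperature, 1e-8)):
        return candidate
    return greedy

# Example geometric cooling schedule across training episodes (assumed):
# T = T0 * alpha ** episode, e.g. T0 = 1.0, alpha = 0.99, so exploration
# gradually anneals toward greedy action selection.
```

Relative to epsilon-greedy exploration, this acceptance rule makes the probability of taking a non-greedy action depend on how much worse it looks, which is the usual motivation for simulated-annealing-based exploration in DQN variants.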

Keywords