IEEE Access (Jan 2024)

Computing Resource Allocation Strategy Based on Cloud-Edge Cluster Collaboration in Internet of Vehicles

  • Xianhao Shen
  • Li Wang
  • Panfeng Zhang
  • Xiaolan Xie
  • Yi Chen
  • Shaofang Lu

DOI: https://doi.org/10.1109/ACCESS.2023.3349029
Journal volume & issue: Vol. 12, pp. 10790–10803

Abstract

Edge computing plays a crucial role in the Internet of Vehicles (IoV), meeting the resource and latency requirements of time-sensitive vehicle applications. However, the emergence of numerous compute-intensive and latency-sensitive applications, such as augmented reality and autonomous driving, means that traditional edge computing architectures can no longer meet the growing application demands of the IoV. This paper extends the paradigm of vehicular edge computing to a collaborative cloud-edge cluster resource provisioning framework. Integrating compute resources from multiple Edge Service Providers (ESPs) and the cloud enables horizontal and vertical collaborative computation offloading among service nodes. To facilitate resource sharing among different ESPs, we introduce a dynamic pricing model and use software-defined networking (SDN) to tackle the complex resource management challenges of this scenario. Furthermore, we establish a mathematical model with the optimization objectives of minimizing task computation latency and maximizing ESP profits. Before resource allocation, we employ a clustering algorithm to determine initial offloading decisions, reducing the dimensionality of the action space. Subsequently, we apply the Double Deep Q-Network (DDQN) algorithm to achieve a rational allocation of compute resources. Simulation results demonstrate that, compared with the Deep Q-Network (DQN) algorithm and a greedy strategy, the proposed approach reduces latency by 18.18% and 34.85%, respectively, while increasing the profits of edge service providers by 16.25% and 33.33%, respectively.
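To make the DDQN step in the abstract concrete, the sketch below shows the Double DQN target computation for a discrete offloading-action space (e.g., choosing which service node executes a task). The state encoding, network sizes, action count, and hyperparameters are illustrative assumptions, not the settings used in the paper.

```python
# Minimal Double DQN (DDQN) update sketch for a discrete offloading decision.
# All dimensions and hyperparameters below are assumed for illustration only.
import torch
import torch.nn as nn

STATE_DIM = 8      # assumed: e.g., task size, deadline, node loads, link rates
NUM_ACTIONS = 5    # assumed: candidate service nodes (local, ESP edges, cloud)
GAMMA = 0.95       # assumed discount factor

def make_q_net():
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, NUM_ACTIONS),
    )

online_net = make_q_net()
target_net = make_q_net()
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)

def ddqn_update(states, actions, rewards, next_states, dones):
    """One gradient step on a minibatch of transitions.

    DDQN decouples action selection (online net) from action evaluation
    (target net), reducing the over-estimation bias of vanilla DQN.
    """
    # Q(s, a) for the actions actually taken
    q_sa = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    with torch.no_grad():
        # Select the greedy next action with the online network...
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # ...but evaluate that action with the target network.
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + GAMMA * (1.0 - dones) * next_q

    loss = nn.functional.mse_loss(q_sa, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the reward would encode the paper's dual objective (lower task latency, higher ESP profit), and the clustering-based initial offloading decision would shrink the candidate action set before the DDQN refines it; both of those mappings are assumptions about how the pieces fit together, stated here only to show where they would plug in.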

Keywords