IEEE Access (Jan 2023)

A Deep Reinforcement Learning-Based Two-Dimensional Resource Allocation Technique for V2I Communications

  • Heetae Jin,
  • Jeongbin Seo,
  • Jeonghun Park,
  • Suk Chan Kim

DOI
https://doi.org/10.1109/ACCESS.2023.3298953
Journal volume & issue
Vol. 11
pp. 78867 – 78878

Abstract


This paper proposes a two-dimensional resource allocation technique for vehicle-to-infrastructure (V2I) communications. Vehicular communication simultaneously requires high data rates, low latency, and high reliability. To support these demands, the 3rd Generation Partnership Project (3GPP) introduced multiple numerologies, which diversify the transmission time interval (TTI). This diversification enables two-dimensional resource allocation that considers time and frequency jointly, a problem that has so far received little study. To tackle it, we propose a reinforcement learning approach to the two-dimensional resource allocation problem for V2I communications. A reinforcement learning agent in a base station allocates a quality-of-service (QoS)-guaranteed two-dimensional resource block to each vehicle so as to maximize the sum of achievable data quantity (ADQ). The agent takes received-power information and the resource occupancy status as input, and it outputs each vehicle's allocation, consisting of a time-frequency position, a bandwidth, and a TTI, which together constitute a solution to the two-dimensional resource allocation problem. Simulation results show that the proposed method outperforms a fixed allocation method. Owing to its ability to pursue ADQ maximization while guaranteeing QoS, the proposed method also performs better than an optimization-based benchmark when each vehicle has a QoS constraint. Moreover, the resources the agent selects vary with the QoS constraint while still maximizing the ADQ.
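To make the abstract's interface concrete, the sketch below shows the shape of the two-dimensional allocation decision it describes: given a resource occupancy grid and a vehicle's received power, choose a (time-frequency position, bandwidth, TTI) block. This is a minimal, hypothetical illustration only — a greedy stand-in scoring blocks by a crude data-quantity proxy, not the paper's deep reinforcement learning agent; all dimensions, names, and choice sets (`N_FREQ`, `BW_CHOICES`, `TTI_CHOICES`, the scoring rule) are assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical grid and choice sets for illustration only (not from the paper):
N_FREQ, N_TIME = 8, 8        # resource grid: 8 subchannels x 8 time slots
BW_CHOICES = [1, 2, 4]       # candidate bandwidths, in subchannels
TTI_CHOICES = [1, 2, 4]      # candidate TTI lengths, in slots

def greedy_allocate(occupancy, rx_power):
    """Toy stand-in for the agent's policy: scan candidate two-dimensional
    blocks (position, bandwidth, TTI), skip any that overlap occupied
    resources, and pick the free block maximizing a crude proxy for
    achievable data quantity, bw * tti * log2(1 + rx_power).
    Returns (freq_idx, time_idx, bw, tti), or None if no free block fits."""
    best, best_score = None, -1.0
    for bw in BW_CHOICES:
        for tti in TTI_CHOICES:
            for f in range(N_FREQ - bw + 1):
                for t in range(N_TIME - tti + 1):
                    if occupancy[f:f + bw, t:t + tti].any():
                        continue  # overlaps an already-allocated block
                    score = bw * tti * np.log2(1.0 + rx_power)
                    if score > best_score:
                        best, best_score = (f, t, bw, tti), score
    return best

def commit(occupancy, alloc):
    """Mark an allocated block as occupied so later vehicles avoid it."""
    f, t, bw, tti = alloc
    occupancy[f:f + bw, t:t + tti] = True
```

Allocating for two vehicles in turn shows the occupancy state coupling the decisions: the second vehicle's block is forced away from the first vehicle's, mirroring how the agent's occupancy-status input shapes successive allocations.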

Keywords