Applied Sciences (Dec 2022)

Cooperative Transmission Mechanism Based on Revenue Learning for Vehicular Networks

  • Mingyang Chen,
  • Haixia Cui,
  • Mingsheng Nie,
  • Qiuxian Chen,
  • Shunan Yang,
  • Yongliang Du,
  • Feipeng Dai

DOI
https://doi.org/10.3390/app122412651
Journal volume & issue
Vol. 12, no. 24
p. 12651

Abstract


With the rapid development of science and technology and the improvement of living standards, vehicles have gradually become the main means of travel. The growth in the number of vehicles has also brought an increasing incidence of traffic accidents. To reduce these accidents, many researchers have proposed using vehicular networks to transmit information quickly: as long as vehicles can receive timely information from nearby vehicles or roadside infrastructure, accidents can be avoided. In vehicular networks, the traditional dual-connection technique, which uses an interference coordination scheduling strategy based on graph theory, can ensure fairness among vehicles and obtain suitable resistance to neighborhood interference with limited computing resources. However, while one base station transmits data to a vehicular user, a nearby base station and that user may remain in a state of suspended communication. As a result, the resource utilization of such a dual-connection vehicular network is insufficient, leading to wasted resources. To solve this issue, this paper presents a multi-point cooperative transmission mechanism for vehicular networks based on revenue learning, in which vehicular network users communicate with the surrounding base stations through cooperative transmission. We use the Q-learning algorithm from reinforcement learning to enable vehicular network users to learn from each other and make cooperative decisions in different environments. In reinforcement learning, the agent makes a decision that changes the state of the environment; the environment then feeds a revenue (reward) signal back to the agent through the learning algorithm, so that the agent gradually learns the optimal decision. Simulation results demonstrate the superiority of our proposed revenue-learning approach compared with the benchmark schemes.
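To make the agent-environment loop described above concrete, the following is a minimal sketch of tabular Q-learning. The states, actions, and revenue (reward) model here are hypothetical placeholders for illustration only; the paper's actual vehicular-network setup, state space, and revenue function are not specified in this abstract.

```python
# Minimal tabular Q-learning sketch of the decide -> act -> receive revenue -> learn loop.
# All names (states, actions, reward ranges) below are illustrative assumptions.
import random
from collections import defaultdict

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration probability

ACTIONS = ["single_link", "cooperative_transmission"]  # hypothetical action set

Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term revenue

def choose_action(state):
    """Epsilon-greedy policy: mostly exploit the best-known action, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Standard Q-learning update: move Q toward reward + discounted best next value."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def step(state, action):
    """Toy environment stand-in: returns a revenue signal and the next state.
    In the paper this feedback would come from the vehicular-network simulation."""
    if action == "cooperative_transmission":
        reward = random.uniform(0.5, 1.0)
    else:
        reward = random.uniform(0.0, 0.6)
    next_state = random.choice(["low_interference", "high_interference"])
    return reward, next_state

# Agent-environment interaction loop: decide, act, observe revenue, learn.
state = "low_interference"
for _ in range(1000):
    action = choose_action(state)
    reward, next_state = step(state, action)
    q_update(state, action, reward, next_state)
    state = next_state
```

Under these assumptions, the table Q gradually favors the action with higher expected revenue in each state, which is the sense in which the agent "learns the optimal decision" from the environment's feedback.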

Keywords