IEEE Access (Jan 2024)

A Car-Following Model Integrating Personalized Driving Style Based on the DER-DDPG Deep Reinforcement Learning Algorithm

  • Chuan Ran,
  • Zhijun Xie,
  • Yuntao Xie,
  • Yang Yin,
  • Hongwu Ye

DOI
https://doi.org/10.1109/ACCESS.2024.3463967
Journal volume & issue
Vol. 12
pp. 136889–136906

Abstract

Decision-making and control play a crucial role in autonomous driving, ensuring the safety and efficiency of vehicle operation. With the development of intelligent driving technology, vehicle following, a key component of Intelligent Transport Systems (ITS), has become not only a technological basis for autonomous driving but also a core means of improving road safety and traffic efficiency. To account for differing driving habits, this study proposes a personalized vehicle-following model that incorporates driving styles into following decisions. In vehicle following, the traditional deep deterministic policy gradient (DDPG) algorithm suffers from low efficiency and poor sample utilization in the early training phase. To address this problem, we introduce an enhanced DDPG algorithm, named DER-DDPG, which combines a dual experience replay pool with delayed sampling. The algorithm tailors reward functions for safety, efficiency, and comfort to different driving styles, weighting them to reflect the diversity of driving styles and thereby achieve a following control strategy closer to human behavior. In addition, this study combines the proposed reinforcement learning model with a collision avoidance strategy based on the Honda safety distance model and a two-level warning mechanism to effectively mitigate potential safety hazards. Experimental results show that the model achieves efficient and comfortable speed control while ensuring safety: the following success rate reaches 94.8%, 5.9% higher than the baseline method, and the cumulative following time is reduced by 69.4%, outperforming a traditional human driver. The model improves training efficiency and accuracy while providing customized following control strategies for different driving styles, offering a new research direction and technical support for the development of autonomous driving systems.
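The abstract names two mechanisms without implementation details: a dual experience replay pool with delayed sampling, and reward functions weighted per driving style. The Python sketch below is one plausible reading of those ideas, not the paper's actual code; the class name, the success-reward threshold, the warm-up delay, the sampling ratio, and the per-style weights are all illustrative assumptions.

```python
import random
from collections import deque


class DualExperienceReplay:
    """Sketch of a dual experience replay pool with delayed sampling.

    Assumption: transitions with high reward go into a "success" pool,
    the rest into a regular pool. Sampling from the success pool is
    delayed until a warm-up period has elapsed; afterwards, minibatches
    mix both pools at a fixed ratio.
    """

    def __init__(self, capacity=100_000, success_threshold=0.0,
                 delay_steps=5_000, success_ratio=0.3):
        self.regular = deque(maxlen=capacity)
        self.success = deque(maxlen=capacity)
        self.success_threshold = success_threshold
        self.delay_steps = delay_steps      # warm-up before mixed sampling
        self.success_ratio = success_ratio  # fraction drawn from success pool
        self.steps = 0

    def store(self, state, action, reward, next_state, done):
        transition = (state, action, reward, next_state, done)
        if reward >= self.success_threshold:
            self.success.append(transition)
        else:
            self.regular.append(transition)
        self.steps += 1

    def sample(self, batch_size):
        # During warm-up (or if no success samples exist yet),
        # draw only from the regular pool.
        if self.steps < self.delay_steps or not self.success:
            return random.sample(self.regular,
                                 min(batch_size, len(self.regular)))
        n_success = min(int(batch_size * self.success_ratio),
                        len(self.success))
        batch = random.sample(self.success, n_success)
        batch += random.sample(self.regular,
                               min(batch_size - n_success, len(self.regular)))
        return batch


def style_weighted_reward(r_safety, r_efficiency, r_comfort, style):
    """Combine the three reward terms with per-style weights.

    The weight values below are hypothetical; the abstract only states
    that the weights vary with driving style.
    """
    weights = {
        "conservative": (0.6, 0.2, 0.2),
        "normal":       (0.4, 0.3, 0.3),
        "aggressive":   (0.3, 0.5, 0.2),
    }[style]
    return sum(w * r for w, r in
               zip(weights, (r_safety, r_efficiency, r_comfort)))
```

In a DDPG training loop, `store` would be called once per environment step and `sample` once per gradient update; the delayed mixing keeps early updates from overfitting to the few high-reward transitions collected before the policy has explored enough.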

Keywords