Advances in Mechanical Engineering (Dec 2022)
Obstacle avoidance planning of autonomous vehicles using deep reinforcement learning
Abstract
Obstacle avoidance path planning in dynamic environments is one of the fundamental problems for autonomous vehicles, involving two optional maneuvers: emergency braking and active steering. This paper proposes an emergency obstacle avoidance planning method based on deep reinforcement learning (DRL) that accounts for both safety and comfort. First, the vehicle emergency braking and lane-change processes are analyzed in detail, and a graded hazard index is defined to quantify the potential risk of the current vehicle motion. Longitudinal safety-distance and lateral waypoint models are established, incorporating comfortable deceleration and a stability coefficient. A fuzzy PID controller is employed to track the planned path and ensure its stability and feasibility. A DRL procedure is then proposed to determine the obstacle avoidance maneuver. In particular, multiple reward functions are designed for different collision types, with penalties for longitudinal rear-end collisions and lane-changing side collisions based on the safety distance, together with comfort and safety rewards. A deep Q-network (DQN) is applied to realize the planning policy; unlike the standard DQN, a long short-term memory (LSTM) layer is incorporated to handle incomplete observations and improve the efficiency and stability of the algorithm in a dynamic environment. Once the policy is trained, the vehicle can automatically perform the best obstacle avoidance maneuver in an emergency, improving driving safety. Finally, a simulation environment is built in CARLA, in which the agent is trained to evaluate the effectiveness of the proposed algorithm. The collision rate, safety-distance difference, and total reward indicate that the collision avoidance path is generated safely, while the lateral acceleration and longitudinal velocity satisfy the comfort requirements. In addition, the proposed method is compared with a conventional DRL baseline, demonstrating superior performance in safety and efficiency.
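
For illustration only, the following is a minimal PyTorch sketch of a DQN Q-network augmented with an LSTM layer, the recurrent structure the abstract refers to for coping with incomplete observations. It is not the authors' implementation; the observation dimension, hidden size, history length, and the three-action discrete space (keep lane, brake, change lane) are illustrative assumptions.

```python
# Hypothetical sketch of a DQN Q-network with an LSTM layer for partial
# observability; dimensions and action space are assumptions, not the paper's.
import torch
import torch.nn as nn

class RecurrentQNetwork(nn.Module):
    def __init__(self, obs_dim=10, hidden_dim=64, num_actions=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        # The LSTM aggregates the recent observation history so Q-values depend
        # on temporal context rather than a single (incomplete) observation.
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, seq_len, obs_dim) window of recent observations
        x = self.encoder(obs_seq)
        x, hidden = self.lstm(x, hidden)
        q_values = self.q_head(x[:, -1, :])  # Q-values at the latest time step
        return q_values, hidden

# Greedy action selection once a policy has been trained:
net = RecurrentQNetwork()
obs_window = torch.zeros(1, 8, 10)           # placeholder 8-step history
q, _ = net(obs_window)
action = q.argmax(dim=-1).item()             # 0: keep lane, 1: brake, 2: change lane
```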