IEEE Access (Jan 2024)
Advanced Decision Making and Motion Planning Framework for Autonomous Navigation in Unsignalized Intersections
Abstract
Autonomous vehicle navigation relies heavily on integrating sensor data, perception, path planning, localization, and vehicle control. However, existing research often treats these components in isolation, particularly at unsignalized intersections, leading to suboptimal performance and safety concerns. This paper presents the Advanced Hierarchical Reinforcement Learning (AHRL) framework, which addresses the challenge of inefficient coordination between decision-making and motion planning in dynamic environments. By integrating a Deep Q-Network (DQN) for high-level decision-making with Model Predictive Control (MPC) for precise motion planning, the AHRL framework ensures smoother, safer navigation in complex scenarios such as unsignalized intersections. These scenarios require vehicles to interpret right-of-way rules, predict other vehicles’ behavior, and navigate conflicting traffic, with risks heightened by sudden speed changes, lane shifts, and shared-space negotiations. In our simulations of a 4-way unsignalized intersection with dynamic traffic flows, the AHRL framework demonstrated substantial improvements over baseline models, achieving a 56% reduction in jerk, a 13.39% reduction in collisions, and a 22.9% reduction in safety-gap violations per episode. These results underscore the framework’s advanced Decision-Making and Motion Planning (DM2P) capabilities, showcasing its potential for safer and more efficient autonomous vehicle navigation in complex traffic scenarios. Integrating DQN and MPC within the AHRL framework addresses the interdependencies between decision-making and motion planning, improving overall performance and reliability.
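To make the hierarchical structure concrete, the following is a minimal sketch, not the paper's implementation, of how a DQN-style decision layer could feed an MPC-style tracking layer in one control step. The action set, target speeds, and the closed-form one-dimensional "MPC" (a single bounded acceleration minimizing squared speed error over the horizon) are all illustrative assumptions; a real system would use a trained Q-network and a constrained trajectory optimizer.

```python
# Hypothetical high-level maneuver set for an unsignalized intersection.
ACTIONS = ("yield", "proceed", "stop")

def dqn_policy(state, q_values):
    """Stand-in for a trained DQN: greedily pick the highest-valued action.
    `q_values` maps (state, action) pairs to values; here it is supplied
    directly instead of being produced by a neural network."""
    return max(ACTIONS, key=lambda a: q_values.get((state, a), 0.0))

def mpc_track(v0, v_ref, horizon=10, dt=0.1, a_max=2.0):
    """Toy 1-D MPC surrogate: choose one constant acceleration that drives
    speed v0 toward v_ref over the horizon, clamped to |a| <= a_max, and
    return the resulting planned speed profile."""
    a = (v_ref - v0) / (horizon * dt)
    a = max(-a_max, min(a_max, a))  # comfort/actuation limit bounds jerk-inducing inputs
    return [v0 + a * dt * (k + 1) for k in range(horizon)]

def ahrl_step(state, v0, q_values):
    """One hierarchical step: the decision layer picks a maneuver, and the
    motion-planning layer realizes it as a smooth speed profile."""
    action = dqn_policy(state, q_values)
    v_ref = {"yield": 2.0, "proceed": 8.0, "stop": 0.0}[action]  # illustrative targets (m/s)
    return action, mpc_track(v0, v_ref)
```

For example, given a state indicating a small gap to cross traffic and Q-values favoring "yield", the planner returns a monotonically decreasing speed profile bounded by the acceleration limit, which is the mechanism by which the low-level layer smooths the high-level decision.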
Keywords