IEEE Access (Jan 2024)
MARVEL: Bringing Multi-Agent Reinforcement-Learning Based Variable Speed Limit Controllers Closer to Deployment
Abstract
Variable Speed Limit (VSL) control is a promising highway traffic management strategy deployed worldwide. Most deployed algorithms operate on a set of predefined rules, which hinders their ability to perform optimally across diverse traffic conditions. Recent studies applying multi-agent reinforcement learning (MARL) techniques to VSL control problems have shown promising results. However, none of the existing MARL frameworks comply with the real-world requirements imposed by most traffic agencies across the United States. In this work, we propose MARVEL (Multi-Agent Reinforcement-learning for large-scale Variable spEed Limits), a novel MARL framework for large-scale VSL highway control with real-world deployment capabilities. MARVEL relies only on sensing information available in most real-world settings and learns through a reward structure that incorporates three abstract goals to ensure reasonable behavior under different traffic conditions. Through parameter sharing among all VSL agents, the proposed framework scales to cover corridors equipped with multiple VSL gantries. The policies are trained in a microscopic traffic simulation environment on a short freeway stretch with 8 VSL agents spanning 7 miles. For testing, these policies are applied to a more extensive network with 34 VSL agents spanning 17 miles of Interstate 24 (I-24) near Nashville, Tennessee, in the United States. MARVEL improves traffic safety by 63.4% compared to the no-control scenario and enhances traffic mobility by 58.6% compared to a state-of-the-practice algorithm that has been deployed on I-24. Finally, we test the response of the policy learned in simulation against real-world data collected from I-24 and illustrate its deployment capability.
Keywords