Drones (Sep 2024)

Achieving Robust Learning Outcomes in Autonomous Driving with DynamicNoise Integration in Deep Reinforcement Learning

  • Haotian Shi,
  • Jiale Chen,
  • Feijun Zhang,
  • Mingyang Liu,
  • Mengjie Zhou

DOI
https://doi.org/10.3390/drones8090470
Journal volume & issue
Vol. 8, no. 9
p. 470

Abstract

The advancement of autonomous driving technology is becoming increasingly vital in the modern technological landscape, where it promises notable enhancements in safety, efficiency, traffic management, and energy use. Despite these benefits, conventional deep reinforcement learning algorithms often struggle to navigate complex driving environments effectively. To tackle this challenge, we propose a novel network called DynamicNoise, designed to significantly boost algorithmic performance by introducing noise into the deep Q-network (DQN) and double deep Q-network (DDQN). Drawing inspiration from the NoisyNet architecture, DynamicNoise uses stochastic perturbations to improve the exploration capabilities of these models, leading to more robust learning outcomes. Our experiments demonstrated a 57.25% improvement in navigation effectiveness in a 2D experimental setting. Moreover, by integrating noise into the action-selection and fully connected layers of the soft actor–critic (SAC) model in the more complex 3D CARLA simulation environment, our approach achieved an 18.9% performance gain, substantially surpassing traditional methods. These results confirm that the DynamicNoise network significantly enhances the performance of autonomous driving systems across simulated environments, regardless of their dimensionality and complexity, by improving their exploration capabilities rather than just their efficiency.
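The noise-injection idea the abstract describes (learned stochastic perturbations in the fully connected layers, replacing ε-greedy exploration) follows the NoisyNet recipe of factorized Gaussian noise on the layer weights. The sketch below is illustrative only, not the paper's actual implementation: it assumes a NoisyNet-style linear layer with learnable mean and sigma parameters and fresh noise drawn on each forward pass.

```python
import numpy as np

class NoisyLinear:
    """Illustrative NoisyNet-style linear layer (not the paper's code):
    weights are mu + sigma * eps, with eps resampled every forward pass,
    so exploration comes from the network itself rather than epsilon-greedy."""

    def __init__(self, in_features, out_features, sigma0=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        bound = 1.0 / np.sqrt(in_features)
        # Learnable parameters (updated by the optimizer in a full agent).
        self.mu_w = self.rng.uniform(-bound, bound, (out_features, in_features))
        self.mu_b = self.rng.uniform(-bound, bound, out_features)
        self.sigma_w = np.full((out_features, in_features), sigma0 * bound)
        self.sigma_b = np.full(out_features, sigma0 * bound)
        self.in_features = in_features
        self.out_features = out_features

    @staticmethod
    def _f(x):
        # Noise-shaping function from the NoisyNet paper: f(x) = sign(x) * sqrt(|x|).
        return np.sign(x) * np.sqrt(np.abs(x))

    def forward(self, x, noisy=True):
        if not noisy:
            # Deterministic evaluation: use the mean weights only.
            return self.mu_w @ x + self.mu_b
        # Factorized Gaussian noise: O(in + out) samples instead of O(in * out).
        eps_in = self._f(self.rng.standard_normal(self.in_features))
        eps_out = self._f(self.rng.standard_normal(self.out_features))
        w = self.mu_w + self.sigma_w * np.outer(eps_out, eps_in)
        b = self.mu_b + self.sigma_b * eps_out
        return w @ x + b
```

In a DQN/DDQN or SAC head, such a layer would replace the final fully connected layers, so repeated evaluations of the same state yield perturbed Q-values (or action logits), driving state-dependent exploration that anneals as the sigma parameters are learned down.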

Keywords