Energy Reports (Nov 2022)

ACDRL: An actor–critic deep reinforcement learning approach for solving the energy-aimed train timetable rescheduling problem under random disturbances

  • Jinlin Liao,
  • Guilian Wu,
  • Hao Chen,
  • Shiyuan Ni,
  • Tingting Lin,
  • Lu Tang

Journal volume & issue
Vol. 8
pp. 1350–1357

Abstract

In recent years, large-scale and high-density operations have caused a dramatic increase in the energy consumption of metro systems. For overcrowded metro systems, the original energy-optimized timetable is no longer optimal after unexpected dwell disturbances occur. In this paper, we propose an actor–critic deep reinforcement learning (ACDRL) approach for solving the energy-aimed train timetable rescheduling (ETTR) problem in a real-time and energy-efficient manner. The proposed ACDRL approach can reduce metro systems’ net traction energy consumption by rescheduling trains within milliseconds after the occurrence of unpredictable dwell disturbances. The simulation results show that the average response time of ACDRL under unpredictable disturbances is only 0.0009 s and 0.0016 s in two-train and five-train metro systems, with average energy savings of 4.73% and 6.95%, respectively. These results indicate that substantial energy savings are achievable in real time.
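To make the actor–critic idea behind the abstract concrete, the sketch below shows a minimal tabular actor–critic update loop. This is an illustrative toy only, not the authors' ACDRL network: the discretized disturbance states, the two actions ("hold timetable" vs. "reschedule"), and the energy-based reward model are all assumptions introduced for demonstration.

```python
# Minimal tabular actor-critic sketch (illustrative assumptions throughout;
# NOT the paper's deep network or energy model).
import math
import random

random.seed(0)

N_STATES = 3   # assumed: discretized dwell-disturbance levels (0 = none)
N_ACTIONS = 2  # assumed: 0 = hold timetable, 1 = reschedule
ALPHA, BETA, GAMMA = 0.1, 0.1, 0.9  # actor lr, critic lr, discount

theta = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # actor preferences
V = [0.0] * N_STATES                                  # critic state values

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def toy_energy_reward(state, action):
    # Assumed toy model: rescheduling (action 1) recovers more traction
    # energy when a disturbance is present, so it earns a higher reward.
    return 1.0 if action == 1 and state > 0 else 0.2

for _ in range(2000):
    s = random.randrange(N_STATES)
    probs = softmax(theta[s])
    a = random.choices(range(N_ACTIONS), weights=probs)[0]
    r = toy_energy_reward(s, a)
    s_next = random.randrange(N_STATES)  # toy random transition
    # TD error drives both the critic and the actor updates
    delta = r + GAMMA * V[s_next] - V[s]
    V[s] += BETA * delta
    for b in range(N_ACTIONS):  # policy-gradient step on preferences
        grad = (1.0 if b == a else 0.0) - probs[b]
        theta[s][b] += ALPHA * delta * grad

# After training, disturbed states (1, 2) should prefer rescheduling.
policy = [softmax(theta[s]) for s in range(N_STATES)]
```

In the paper's setting, the state would instead encode train positions and dwell disturbances, the actions would adjust speed profiles or departure times, and both actor and critic would be neural networks; the millisecond response times reported above come from the fact that acting is a single forward pass through the trained actor.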

Keywords