Applied Sciences (Nov 2020)

Train Scheduling with Deep Q-Network: A Feasibility Test

  • Intaek Gong,
  • Sukmun Oh,
  • Yunhong Min

DOI
https://doi.org/10.3390/app10238367
Journal volume & issue
Vol. 10, no. 23
p. 8367

Abstract

We consider a train scheduling problem in which both local and express trains are to be scheduled. In this type of train scheduling problem, the key decision is determining the overtaking stations at which express trains overtake their preceding local trains. This problem has been successfully modeled via mixed integer programming (MIP) models. One obvious limitation of MIP-based approaches is the lack of freedom in the choice of objective and constraint functions. In this paper, as an alternative, we propose an approach based on reinforcement learning. We first decompose the problem into subproblems in which a single express train and its preceding local trains are considered. We then formulate each subproblem as a Markov decision process (MDP). Instead of solving each instance of the MDP, we train a deep neural network, called a deep Q-network (DQN), which approximates the Q-value function of any instance of the MDP. The learned DQN can be used to make decisions by choosing the action that corresponds to the maximum Q-value. The advantage of the proposed method is its ability to incorporate arbitrarily complex objective and/or constraint functions. We demonstrate the performance of the proposed method through numerical experiments.
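The abstract describes using a trained DQN to make decisions by selecting the action with the maximum Q-value. The sketch below illustrates that greedy selection step only; the network architecture, state encoding, and action space (candidate overtaking stations) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of greedy action selection with a trained DQN (PyTorch).
# Network shape, state encoding, and action space are assumptions for illustration.
import torch
import torch.nn as nn


class DQN(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        # Small fully connected network approximating Q(s, a) for all actions at once.
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)  # one Q-value per candidate action


def greedy_action(q_network: DQN, state: torch.Tensor) -> int:
    """Pick the action (e.g., a candidate overtaking station) with the maximum Q-value."""
    with torch.no_grad():
        q_values = q_network(state.unsqueeze(0))  # shape: (1, n_actions)
    return int(q_values.argmax(dim=1).item())


# Example: a hypothetical 16-dimensional state summarizing an express train and
# its preceding local trains, with 8 candidate overtaking stations as actions.
q_net = DQN(state_dim=16, n_actions=8)
action = greedy_action(q_net, torch.zeros(16))
```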

Keywords