IEEE Access (Jan 2023)

An Optimization Method-Assisted Ensemble Deep Reinforcement Learning Algorithm to Solve Unit Commitment Problems

  • Jingtao Qin,
  • Yuanqi Gao,
  • Mikhail Bragin,
  • Nanpeng Yu

DOI
https://doi.org/10.1109/ACCESS.2023.3313998
Journal volume & issue
Vol. 11
pp. 100125–100136

Abstract


Unit commitment (UC) is a fundamental problem in the day-ahead electricity market, and it is critical to solve UC problems efficiently. Mathematical optimization techniques such as dynamic programming, Lagrangian relaxation, and mixed-integer quadratic programming (MIQP) are commonly adopted for UC problems. However, the computation time of these methods grows exponentially with the number of generators and energy resources, which remains the main bottleneck in industry. Recent advances in artificial intelligence have demonstrated the capability of reinforcement learning (RL) to solve UC problems. Unfortunately, existing RL approaches to UC suffer from the curse of dimensionality as the problem size grows. To address these challenges, we propose an optimization method-assisted ensemble deep reinforcement learning algorithm, where UC problems are formulated as a Markov Decision Process (MDP) and solved by multi-step deep Q-learning in an ensemble framework. The proposed algorithm establishes a candidate action set by solving tailored optimization problems to ensure relatively high performance and the satisfaction of operational constraints. Numerical studies on three test systems show that our algorithm outperforms the baseline RL algorithm in terms of computational efficiency and operational cost. By employing the output of our proposed algorithm as a warm start, the MIQP technique can achieve further reductions in operational costs. Furthermore, the proposed algorithm shows strong generalization capability under unforeseen operational conditions.
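The abstract outlines three ingredients: an hourly MDP formulation of UC, a small candidate action set built by solving tailored optimization problems, and an ensemble of Q-learners whose estimates are aggregated. The sketch below is not the authors' implementation; it is a minimal Python illustration of those ideas under stated assumptions, with a cheapest-first heuristic standing in for the optimization-based candidate generation and with simplified linear fuel and start-up costs. All unit data, function names, and cost models are illustrative assumptions.

```python
# Illustrative sketch only -- not the authors' code. Unit data, the greedy
# candidate generator, and the proxy cost model are assumptions for exposition.
from dataclasses import dataclass

@dataclass
class Unit:
    p_min: float          # MW, minimum output when committed
    p_max: float          # MW, maximum output when committed
    marginal_cost: float  # $/MWh, simplified linear fuel cost
    startup_cost: float   # $ per start-up

def candidate_actions(units, prev_commit, demand, k=4):
    """Return up to k commitment vectors whose capacity covers `demand`.

    Stands in for the paper's 'tailored optimization problems': rank units by
    marginal cost, commit cheapest-first until demand is covered, then add a
    few variations with one extra unit committed as reserve.
    """
    order = sorted(range(len(units)), key=lambda i: units[i].marginal_cost)
    base, cap = [0] * len(units), 0.0
    for i in order:
        base[i], cap = 1, cap + units[i].p_max
        if cap >= demand:
            break
    cands = [tuple(base)]
    for i in order:
        if len(cands) >= k:
            break
        if base[i] == 0:
            alt = list(base)
            alt[i] = 1
            cands.append(tuple(alt))
    return cands

def stage_cost(units, commit, prev_commit, demand):
    """Proxy one-hour cost: pro-rata fuel plus start-up costs (no full dispatch)."""
    on = [i for i, u in enumerate(commit) if u]
    cap = sum(units[i].p_max for i in on)
    if cap < demand:
        return float("inf")  # cannot serve the load -> treat as infeasible
    fuel = sum(units[i].marginal_cost * units[i].p_max * demand / cap for i in on)
    starts = sum(units[i].startup_cost for i in on if prev_commit[i] == 0)
    return fuel + starts

def ensemble_q(q_estimators, state, action):
    """Aggregate an ensemble of Q-functions by averaging their estimates."""
    return sum(q(state, action) for q in q_estimators) / len(q_estimators)

# Tiny demo with made-up units and a single hour of 180 MW demand.
units = [Unit(10, 100, 20, 500), Unit(20, 150, 35, 800), Unit(5, 60, 50, 200)]
prev = (1, 0, 0)
for action in candidate_actions(units, prev, demand=180.0):
    print(action, round(stage_cost(units, action, prev, 180.0), 1))
```

In this reading, at each hour the agent scores only the few candidate commitments using the ensemble Q-values together with the stage cost, which is what keeps the action space tractable as the number of generators grows; the resulting schedule could then serve as the warm start handed to an MIQP solver, as described in the abstract.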

Keywords