IEEE Access (Jan 2020)

Reinforcement-Learning-Based Energy Storage System Operation Strategies to Manage Wind Power Forecast Uncertainty

  • Eunsung Oh,
  • Hanho Wang

DOI
https://doi.org/10.1109/ACCESS.2020.2968841
Journal volume & issue
Vol. 8
pp. 20965 – 20976

Abstract


Currently, renewable-energy-based power generation is rapidly developing to tackle climate change; however, the use of renewable energy is limited owing to the uncertainty related to renewable energy sources. In particular, energy storage systems (ESSs) are critical for implementing wind power generation (WPG), which entails a wide uncertainty range. Herein, a reinforcement learning (RL)-based ESS operation strategy is investigated for managing the WPG forecast uncertainty. First, a WPG forecast uncertainty minimization problem is formulated with respect to the ESS operation, subject to ESS constraints; the problem is then presented as a Markov decision process (MDP) model, with the state-action space limited by the ESS characteristics. To achieve the optimal solution of the MDP model, an expected state–action–reward–state–action (SARSA) method, which is robust to dispersion in the system environment, is employed. Further, frequency-domain data screening based on the k-means clustering method is implemented to improve learning performance by reducing the variance of the WPG forecast uncertainty. Extensive simulations are conducted based on practical WPG data and forecasts. Results indicate that the proposed clustered RL-based ESS operation strategy can manage the WPG forecast uncertainty more effectively than conventional Q-learning-based methods; additionally, the proposed method demonstrates near-optimal performance, within a 1%-point gap of the optimal solution, which requires complete information, including future values.
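To make the abstract's approach concrete, the sketch below illustrates an expected SARSA update over a discretized ESS state-action space in which infeasible charge/discharge actions are pruned by ESS constraints, as the abstract describes. It is a minimal illustration, not the authors' implementation: the state-of-charge discretization, action set, learning parameters, and function names (feasible_actions, policy_probs, expected_sarsa_update) are all assumptions chosen for clarity, and the reward would typically be the negative residual forecast error after ESS compensation.

import numpy as np

# Hypothetical discretization of the ESS state of charge (SoC) and of the
# charge/discharge actions; the values are illustrative, not from the paper.
N_SOC = 11                                         # SoC bins: 0%, 10%, ..., 100%
ACTIONS = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])    # fraction of capacity per step

Q = np.zeros((N_SOC, len(ACTIONS)))                # action-value table
alpha, gamma, epsilon = 0.1, 0.95, 0.1             # assumed learning parameters

def feasible_actions(soc_idx):
    """Limit the action space by ESS constraints: the ESS cannot discharge
    below empty or charge above full (state-action space reduction)."""
    soc = soc_idx / (N_SOC - 1)
    return [i for i, a in enumerate(ACTIONS) if 0.0 <= soc + a <= 1.0]

def policy_probs(soc_idx):
    """Epsilon-greedy probabilities over the feasible actions."""
    acts = feasible_actions(soc_idx)
    probs = np.full(len(acts), epsilon / len(acts))
    best = max(acts, key=lambda i: Q[soc_idx, i])
    probs[acts.index(best)] += 1.0 - epsilon
    return acts, probs

def expected_sarsa_update(s, a, reward, s_next):
    """Expected SARSA: bootstrap on the policy-weighted mean of Q(s', .)
    rather than a single sampled next action, which reduces update variance
    under a dispersed (uncertain) environment."""
    acts, probs = policy_probs(s_next)
    expected_q = float(np.dot(probs, Q[s_next, acts]))
    Q[s, a] += alpha * (reward + gamma * expected_q - Q[s, a])

In a training loop, each step would observe the current SoC bin, pick a feasible action epsilon-greedily, compute the reward from the realized forecast error after the ESS action, and call expected_sarsa_update; the k-means-based data screening mentioned in the abstract would act upstream, grouping forecast-error profiles so that a separate Q-table (or policy) is learned per cluster.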

Keywords