IEEE Access (Jan 2024)
Approximating Energy Market Clearing and Bidding With Model-Based Reinforcement Learning
Abstract
Energy market rules should incentivize market participants to behave in a way that conforms to both market and grid requirements. However, if the market design is flawed, they can also provide incentives for undesired and unexpected strategies. Multi-agent reinforcement learning (MARL) is a promising new approach for predicting the expected profit-maximizing behavior of energy market participants in simulation. However, reinforcement learning requires many interactions with the system to converge, and the power system environment often involves computationally expensive steps, e.g., optimal power flow (OPF) calculations for market clearing. To tackle this complexity, we provide a basic MARL algorithm with a model of the energy market in the form of a learned OPF approximation and explicit market rules. The learned OPF surrogate model makes explicitly solving the OPF unnecessary. Our experiments demonstrate that the model additionally reduces training time by about one order of magnitude, at the cost of slightly worse performance. Potential applications of our method are market design, more realistic modeling of market participants, and analysis of manipulative behavior. Source code is available at https://github.com/Digitalized-Energy-Systems/marl_clearing_and_bidding.
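To make the setup concrete, the following minimal sketch (not taken from the paper; all names such as `opf_surrogate`, the random linear clearing map, and the pay-as-bid payoff are hypothetical placeholders) illustrates the general idea of letting a learned clearing surrogate stand in for an explicit OPF call inside a simple multi-agent bidding loop.

```python
# Conceptual sketch only: a learned surrogate replaces OPF-based market
# clearing inside a multi-agent bidding loop. Everything here is a
# hypothetical stand-in, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_STEPS = 3, 1000

# Hypothetical learned OPF surrogate: maps bid prices to dispatched power per
# agent. In the paper's setting this would be a model trained on OPF
# solutions; here a fixed random linear map stands in.
W = rng.normal(size=(N_AGENTS, N_AGENTS))

def opf_surrogate(bids: np.ndarray) -> np.ndarray:
    """Approximate market clearing: bids -> dispatch, without an OPF solver."""
    return np.clip(W @ bids, 0.0, 1.0)

# Minimal independent learners: each agent holds a single bid parameter and
# nudges it in the direction that increased its own profit (a zeroth-order
# hill-climbing stand-in for an actual RL update rule).
bids = np.full(N_AGENTS, 0.5)
marginal_cost = rng.uniform(0.1, 0.4, size=N_AGENTS)

for step in range(N_STEPS):
    noise = 0.05 * rng.standard_normal(N_AGENTS)
    dispatch = opf_surrogate(bids + noise)
    # Explicit market rule assumed here: pay-as-bid remuneration.
    profit = (bids + noise - marginal_cost) * dispatch
    baseline = (bids - marginal_cost) * opf_surrogate(bids)
    bids += 0.1 * np.sign(profit - baseline) * noise  # move toward higher profit
    bids = np.clip(bids, 0.0, 1.0)

print("learned bids:", np.round(bids, 3))
```

Because every market-clearing step in the loop is a cheap surrogate evaluation instead of an optimization run, the many environment interactions required by reinforcement learning become far less expensive, which is the source of the reported speed-up.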
Keywords