IEEE Access (Jan 2024)

Optimization of Peer-to-Peer Energy Trading With a Model-Based Deep Reinforcement Learning in a Non-Sharing Information Scenario

  • Nat Uthayansuthi,
  • Peerapon Vateekul

DOI
https://doi.org/10.1109/ACCESS.2024.3442445
Journal volume & issue
Vol. 12
pp. 111021 – 111034

Abstract


In the realm of sustainable energy distribution, peer-to-peer (P2P) trading within microgrids has emerged as a promising solution, fostering decentralization and efficiency. While previous studies focused on optimizing P2P trading, they often relied on the impractical assumption that prosumers share private information. To overcome this limitation, we aim to optimize P2P energy trading within the microgrid under the realistic assumption that no private information is shared, using our proposed model-based multi-agent deep reinforcement learning model. Firstly, our framework integrates long short-term memory (LSTM) networks into the policy model. Secondly, the model-based component employs temporal fusion transformers (TFT) to forecast 24-hour-ahead net load consumption. Thirdly, global horizontal irradiance (GHI) is added as an input feature. Finally, a clustering technique segments a large number of households into smaller household groups. Experiments were conducted on the Ausgrid dataset, comprising 300 households in Sydney, Australia. Results demonstrate that our model achieved 4.20% and 3.95% lower microgrid electricity costs than MADDPG and A3C3, respectively, both of which rely on information sharing. Moreover, it achieves 12.48% lower costs than trading energy directly with the utility grid.
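The clustering step described above can be sketched as follows. This is an illustrative example only: the abstract does not name the clustering algorithm, so a minimal k-means on average daily net-load profiles is assumed here, and the household data is synthetic rather than the actual Ausgrid records.

```python
import numpy as np

# Synthetic stand-in for Ausgrid data: 30 households, each described by its
# average 24-hour net-load profile (kW). Two behaviour types are simulated.
rng = np.random.default_rng(0)
morning = rng.normal(1.0, 0.1, (15, 24)); morning[:, 8] += 2.0   # morning-peak homes
evening = rng.normal(1.0, 0.1, (15, 24)); evening[:, 19] += 2.0  # evening-peak homes
profiles = np.vstack([morning, evening])                          # shape (30, 24)

def kmeans(X, k=2, iters=20):
    """Tiny k-means with farthest-first initialization.

    Returns one group label per household profile.
    """
    centers = [X[0]]
    for _ in range(k - 1):  # next center = point farthest from chosen centers
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each profile to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):  # keep the old center if a cluster goes empty
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

groups = kmeans(profiles, k=2)  # one trading group label per household
```

In a pipeline like the one the abstract outlines, each resulting group could then be handled by its own multi-agent trading setup, keeping the per-group agent count small.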

Keywords