Energy Conversion and Economics (Jun 2023)

Highly transferable adversarial attack against deep‐reinforcement‐learning‐based frequency control

  • Zhongwei Li,
  • Yang Liu,
  • Peng Qiu,
  • Hongyan Yin,
  • Xu Wan,
  • Mingyang Sun

DOI: https://doi.org/10.1049/enc2.12086
Journal volume & issue: Vol. 4, no. 3, pp. 202–212

Abstract

With the increase in inverter‐based renewable energy resources, the complexity and uncertainty of low‐carbon power systems have increased significantly. Deep reinforcement learning (DRL)–based approaches have been extensively studied for frequency control to overcome the limitations of traditional model‐based approaches. The goal of DRL‐based methods for primary frequency control is to minimise load shedding while satisfying frequency safety requirements, thereby reducing control costs. However, the vulnerabilities of DRL models pose new security threats to power systems, and these threats have not been identified or addressed in the existing literature. Therefore, in this paper, a series of vulnerability assessment methods is proposed for DRL‐based frequency control, with a focus on the under‐frequency load shedding (UFLS) problem. Three adversarial sample generation methods are designed with different optimisation directions: Q‐value‐based FGSM (Q‐FGSM), action‐based JSMA (A‐JSMA), and state‐action‐based CW (SA‐CW). Furthermore, a hybrid adversarial attack algorithm, the Q‐value‐state‐action‐based mix (QSA‐MIX), is designed by combining the advantages of the above three attack methods to significantly disrupt the decision process of the DRL model. In case studies on the IEEE 39‐bus system, the proposed attack methods had a severe impact on system operation and control. In particular, the high transferability of the proposed attack algorithms in a black‐box setting provides further evidence that the vulnerability of current DRL‐based control schemes is prevalent.
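To illustrate the kind of attack the abstract describes, the sketch below shows a generic FGSM‐style perturbation of the state observed by a DRL agent. This is not the paper's exact Q‐FGSM algorithm; it is a minimal toy in which the Q‐function is assumed to be linear (Q(s) = W·s), so the gradient of the greedy action's Q‐value with respect to the state is simply the corresponding row of W. All names and values here are illustrative assumptions.

```python
import numpy as np

def fgsm_state_attack(W, s, eps):
    """Perturb state s to lower the Q-value of the agent's greedy action.

    W   : (n_actions, n_state) weights of a hypothetical linear Q-function
    s   : (n_state,) observed state
    eps : attack budget (L-infinity bound on the perturbation)
    """
    q = W @ s
    a_star = int(np.argmax(q))      # the agent's greedy action
    grad = W[a_star]                # dQ(s, a*)/ds for a linear Q-function
    return s - eps * np.sign(grad)  # one signed-gradient step against a*

# Toy example: 2 actions, a 3-dimensional state
W = np.array([[0.5, -1.0, 0.2],
              [0.1,  0.3, -0.4]])
s = np.array([1.0, 0.5, -0.2])
s_adv = fgsm_state_attack(W, s, eps=0.05)
```

The perturbation stays within the eps budget in the infinity norm while strictly decreasing the Q‐value of the originally chosen action; with a real deep Q‐network the gradient would instead come from backpropagation through the network.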
