IEEE Access (Jan 2023)

Countering Evasion Attacks for Smart Grid Reinforcement Learning-Based Detectors

  • Ahmed T. El-Toukhy,
  • Mohamed M. E. A. Mahmoud,
  • Atef H. Bondok,
  • Mostafa M. Fouda,
  • Maazen Alsabaan

DOI
https://doi.org/10.1109/ACCESS.2023.3312376
Journal volume & issue
Vol. 11
pp. 97373 – 97390

Abstract

Fraudulent customers in smart power grids launch cyber-attacks by manipulating their smart meters and reporting false consumption readings to reduce their bills. To combat these attacks and mitigate financial losses, various machine learning-based electricity theft detectors have been proposed. Unfortunately, these detectors are vulnerable to serious cyber-attacks, specifically evasion attacks. The objective of this paper is to investigate the robustness of deep reinforcement learning (DRL)-based detectors against our proposed evasion attacks through a series of experiments. Firstly, we introduce DRL-based electricity theft detectors implemented using the double deep Q-network (DDQN) algorithm. Secondly, we propose a DRL-based attack model that generates adversarial evasion attacks in a black-box attack scenario. These evasion samples are generated by modifying malicious reading samples so that they deceive the detectors and appear benign. We leverage the attractive features of reinforcement learning (RL) to determine the optimal actions for modifying the malicious samples. We compare our DRL-based evasion attack model with a fast gradient sign method (FGSM)-based evasion attack model. The experimental results reveal a significant degradation in detector performance under the DRL-based evasion attack, which achieves an attack success rate (ASR) ranging from 92.92% to 99.96%. Thirdly, to counter these attacks and enhance detection robustness, we propose hardened DRL-based defense detectors built through an adversarial training process, in which the DRL-based detectors are retrained on the generated evasion samples. The proposed defense model achieves outstanding detection performance, degrading the ASR to a range of 1.80% to 9.20%. Finally, we address the question of whether the DRL-based hardened defense model, which has been adversarially trained on DRL-based evasion samples, is capable of defending against FGSM-based evasion samples, and vice versa. We conduct extensive experiments to validate the effectiveness of our proposed attack and defense models.
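To illustrate the FGSM-based evasion baseline mentioned in the abstract, the sketch below shows the standard fast gradient sign method applied to a detector over hourly consumption readings. This is not the paper's model: the detector here is a hypothetical logistic-regression stand-in (the paper uses DDQN-based detectors), the weights are random, and the 24-reading input shape, the `eps` budget, and the non-negativity clipping are all illustrative assumptions. The gradient of the binary cross-entropy loss with respect to the input is computed analytically as `(p - y) * w`.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical stand-in detector: logistic regression over 24 hourly
# readings (the paper's actual detectors are DDQN-based).
w = rng.normal(size=24)
b = 0.1

def detect(x):
    """Return the detector's probability that reading vector x is malicious."""
    return sigmoid(x @ w + b)

def fgsm_evasion(x, eps=0.05):
    """FGSM evasion: perturb a malicious sample (true label y = 1) in the
    direction that increases the detector's loss for 'malicious', so the
    sample looks benign. For BCE loss, dL/dx = (p - y) * w."""
    p = detect(x)
    grad = (p - 1.0) * w                  # y = 1 (malicious sample)
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, None)      # assumed: readings stay non-negative

# Synthetic malicious reading vector (illustrative only).
x_mal = np.abs(rng.normal(loc=1.0, size=24))
print(f"before: {detect(x_mal):.3f}  after: {detect(fgsm_evasion(x_mal, eps=0.3)):.3f}")
```

The adversarial-training defense the paper proposes amounts to generating such evasion samples, labeling them as malicious, and retraining the detector on the augmented dataset; the same `fgsm_evasion` routine could supply those retraining samples in this toy setting.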

Keywords