IEEE Access (Jan 2023)

Novel Evasion Attacks Against Adversarial Training Defense for Smart Grid Federated Learning

  • Atef H. Bondok,
  • Mohamed Mahmoud,
  • Mahmoud M. Badr,
  • Mostafa M. Fouda,
  • Mohamed Abdallah,
  • Maazen Alsabaan

DOI
https://doi.org/10.1109/ACCESS.2023.3323617
Journal volume & issue
Vol. 11
pp. 112953 – 112972

Abstract


In the advanced metering infrastructure (AMI) of the smart grid, smart meters (SMs) are deployed to collect fine-grained electricity consumption data, enabling billing, load monitoring, and efficient energy management. However, some consumers engage in fraudulent behavior by hacking their meters, leading to either traditional electricity theft or more sophisticated evasion attacks (EAs). EAs aim to illegally reduce electricity bills while deceiving theft detection mechanisms. Current detection methods raise privacy concerns because they require access to consumers' detailed consumption data for training. To address these concerns, federated learning (FL) has been proposed as a collaborative training approach across multiple consumers. Adversarial training (AT) has shown promise in countering evasion threats against machine learning models. This paper first investigates the susceptibility of traditional electricity theft classifiers trained by FL to EAs for both independent and identically distributed (IID) and non-IID consumption data. It then investigates the effectiveness of AT in securing the global electricity theft detector against EAs, assuming no misbehavior from the participating consumers in the FL process. After that, we introduce three novel attacks, namely Distillation, No-Adversarial-Sample-Training, and False-Labeling, which can be launched during the AT process to make the global model susceptible to evasion at inference time. Finally, extensive experiments are conducted to validate the severity of the proposed attacks. Our findings reveal that AT can counter EAs effectively when the FL participants are honest, but it fails when they act maliciously and launch our attacks. This work lays the foundation for future endeavors in exploring additional countermeasures, in conjunction with AT, to bolster the security and resilience of FL machine learning models against adversarial attacks in the context of electricity theft detection.

Keywords