e-Prime: Advances in Electrical Engineering, Electronics and Energy (Mar 2025)
Adversarial measurements for convolutional neural network-based energy theft detection model in smart grid
Abstract
Electricity theft has become a major problem worldwide and a significant burden for utility companies. It not only results in revenue loss but also degrades power quality, increases generation costs, and raises overall electricity prices. Energy theft detection (ETD) systems based on machine learning, particularly those employing neural networks, achieve high detection accuracy and have become popular in the literature. Recent studies, however, reveal that machine learning and deep learning models are vulnerable, and new attack techniques continue to emerge across domains, including the energy and financial sectors. As the use of machine learning for energy theft detection has grown, it has become important to explore its weaknesses. Research has shown that most ETD models are vulnerable to evasion attacks (EA), whose goal is to reduce electricity costs by deceiving the model into classifying a fraudulent customer as legitimate. In this paper, four experiments are conducted. First, we evaluate the performance of a Convolutional Neural Network and AdaBoost (CNN-Adaboost) ETD system. Second, we design an evasion attack to assess the model's performance under attack. The attack comprises two methods: the first is our proposed Adversarial Data Generation Method (ADGM), a novel algorithm designed to generate adversarial data, and the second is the Fast Gradient Sign Method (FGSM). In the third experiment, we test the attack success rate on different percentages of malicious consumers. Finally, the performance of CNN-Adaboost and other state-of-the-art methods is tested and compared using 10 % and 20 % adversarial data. Our proposed attack is validated on the State Grid Corporation of China (SGCC) dataset. The ADGM and FGSM attack models generate adversarial evasion samples by modifying benign samples together with already available malicious data. These samples are passed to a surrogate model to test how effectively they evade detection, and only the samples that successfully deceive the surrogate model are forwarded to the target. The overall performance of the CNN-Adaboost ETD model decreased significantly for both methods: accuracy dropped from 96.3 % to 53.61 % for ADGM and to 63.42 % for FGSM, with transferability rates of 95.82 % and 90.68 %, respectively. Our findings reveal that the attack success rate (ASR) of ADGM is 94.11 %, which is higher than that of FGSM. It is also observed that as the percentage of adversarial data increases, the accuracy of the models decreases: the accuracy of CNN-Adaboost, initially 96.3 %, decreased to 85.45 % and 79.43 % for 10 % and 20 % adversarial data, respectively. These adversarial samples are transferable and are useful for designing robust and secure machine learning (ML) models.
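For illustration only, the sketch below shows how the FGSM step and the surrogate-model filtering described above could be realized; it is not the authors' implementation. The surrogate architecture, the feature length of the load profiles, the epsilon value, and all function names are assumptions introduced here for the example. ADGM is the authors' novel method and is therefore not reproduced.

import torch
import torch.nn as nn

class SurrogateCNN(nn.Module):
    """Toy 1-D CNN standing in for the surrogate ETD classifier (assumed architecture)."""
    def __init__(self, n_features: int = 1000):  # placeholder feature length, not the SGCC value
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(8, 2),  # class 0 = benign, class 1 = theft
        )

    def forward(self, x):
        return self.net(x)

def fgsm_attack(model, x, y, epsilon=0.05):
    """Standard FGSM: perturb x in the direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(min=0.0)  # consumption readings cannot be negative
    return x_adv.detach()

def keep_evasive_samples(surrogate, x_adv):
    """Forward only the adversarial samples the surrogate labels as benign (class 0)."""
    with torch.no_grad():
        preds = surrogate(x_adv).argmax(dim=1)
    return x_adv[preds == 0]

if __name__ == "__main__":
    surrogate = SurrogateCNN()
    x = torch.rand(16, 1, 1000)           # 16 synthetic theft profiles (placeholder data)
    y = torch.ones(16, dtype=torch.long)  # true label: theft
    x_adv = fgsm_attack(surrogate, x, y)
    evasive = keep_evasive_samples(surrogate, x_adv)
    print(f"{evasive.shape[0]} of {x.shape[0]} samples evade the surrogate")

In this sketch, only the profiles that the surrogate misclassifies as benign would be forwarded to the target ETD model, mirroring the transferability-based selection step described in the abstract.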