Electronics (Dec 2024)

Deceptive Maneuvers: Subverting CNN-AdaBoost Model for Energy Theft Detection

  • Santosh Nirmal,
  • Pramod Patil

DOI
https://doi.org/10.53314/ELS2428046N
Journal volume & issue
Vol. 28, no. 2
pp. 46 – 53

Abstract

As deep learning models become more prevalent in smart grid systems, ensuring their accuracy in tasks such as identifying abnormal customer behaviour is increasingly important. As their use in smart grids for detecting energy theft grows, so do attackers' attempts to craft adversarial data that deceives the model into producing a desired output. Evasion attacks (EA) attempt to evade detection by causing input data to be misclassified at test time: the inputs are manipulated in ways that are not noticeable to humans but can cause a machine learning (ML) model to produce incorrect results. Electricity theft is a major problem for utility companies and must be dealt with effectively. A hybrid model combining a Convolutional Neural Network (CNN) and AdaBoost (CNN-AdaBoost) has been developed that promises to detect electricity theft with high accuracy; however, this model is also vulnerable to evasion attacks that can render it ineffective. In this paper, to make the detection system more robust, we present a generative method for creating evasion attacks against the hybrid CNN-AdaBoost model. Adversarial data generated by the proposed algorithm is applied to the model to test its performance. Our proposed attack is validated on the State Grid Corporation of China (SGCC) dataset. We evaluate the CNN-AdaBoost energy theft detection model, as well as other models, under 5% and 10% evasion attacks. Our findings show that under the proposed generative evasion attack, model accuracy degrades from 96.35% to 89.23%. With the defence mechanism, we raise adversarial accuracy to up to 97% and reduce the attack success rate (ASR) to as low as 3%. We also test the model with varying percentages of adversarial data to analyse its behaviour. These adversaries are useful for designing robust and secure machine learning models, offering an improved solution compared to previous work in this area.
The proposed attack and defence can be used to test energy theft detection (ETD) models in industrial and commercial settings.
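The abstract describes two ideas: an evasion attack that perturbs inputs imperceptibly at test time, and a defence that restores adversarial accuracy. The paper's generative attack against CNN-AdaBoost is not reproduced here; as a hedged illustration only, the sketch below uses a simple logistic-regression detector on synthetic consumption profiles as a stand-in, a gradient-sign (FGSM-style) step as the evasion, and adversarial training as the defence. All data, parameter values, and the attack form are illustrative assumptions, not the authors' method or the SGCC dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 24-point daily consumption profiles: "theft" profiles report
# lower usage on average. Purely illustrative, not the SGCC dataset.
n, d = 400, 24
X_normal = rng.normal(1.0, 0.2, (n, d))
X_theft = rng.normal(0.6, 0.2, (n, d))
X = np.vstack([X_normal, X_theft])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = theft

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(Xs, ys, steps=500, lr=0.5):
    # Plain gradient-descent logistic regression as a stand-in detector.
    w, b = np.zeros(Xs.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(Xs @ w + b)
        w -= lr * Xs.T @ (p - ys) / len(ys)
        b -= lr * np.mean(p - ys)
    return w, b

def detect_rate(w, b, Xs):
    # Fraction of (true-theft) samples the detector flags as theft.
    return np.mean(sigmoid(Xs @ w + b) > 0.5)

w, b = train(X, y)
clean_acc = detect_rate(w, b, X_theft)

# Evasion attack: for this linear model the gradient of the theft score
# w.r.t. the input has the sign of w, so an FGSM-style step of size eps
# against it lowers the theft score while keeping each change small.
eps = 0.25
X_adv = X_theft - eps * np.sign(w)
adv_acc = detect_rate(w, b, X_adv)

# Defence (adversarial training): fold the crafted samples back into the
# training set with their true label and retrain the detector.
X_def = np.vstack([X, X_adv])
y_def = np.concatenate([y, np.ones(n)])
w2, b2 = train(X_def, y_def)
def_acc = detect_rate(w2, b2, X_adv)

print(f"theft detection, clean: {clean_acc:.2f}, "
      f"under attack: {adv_acc:.2f}, after defence: {def_acc:.2f}")
```

A single gradient-sign step is the simplest possible evasion; a generative attack like the paper's would learn the perturbations rather than take one fixed step, but the evaluation loop, measuring detection on clean theft, attacked theft, and after retraining, is the same.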