IET Communications (Aug 2024)
A knowledge distillation strategy for enhancing the adversarial robustness of lightweight automatic modulation classification models
Abstract
Automatic modulation classification models based on deep learning are vulnerable to adversarial attacks, in which an attacker adds carefully crafted adversarial perturbations to the transmitted signal so that the classification model misclassifies the received signal. To meet the requirements of efficient computing and edge deployment, a lightweight automatic modulation classification model is proposed. Because the lightweight model is more susceptible to adversarial attacks, and because adversarial training of the lightweight model fails to achieve the desired results, an adversarial attack defense method for the lightweight model is further proposed to enhance its robustness under adversarial attack. The defense transfers the adversarial robustness of a trained large automatic modulation classification model to the lightweight model through adversarial robust distillation. In white-box attack scenarios, the proposed method exhibits better adversarial robustness than current defense techniques for feature-fusion-based automatic modulation classification models.
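The core of the defense described above is a distillation objective that trains the lightweight (student) model on adversarial examples while matching the soft predictions of the robust large (teacher) model. The following is a minimal sketch of such an adversarial robust distillation loss; the temperature, weighting, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature yields softer targets.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) per sample, with a small epsilon to avoid log(0).
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def robust_distillation_loss(student_adv_logits, teacher_logits, labels,
                             temperature=4.0, alpha=0.9):
    """Sketch of an adversarial robust distillation objective:
    alpha * T^2 * KL(teacher soft labels || student on adversarial inputs)
    + (1 - alpha) * cross-entropy(student on adversarial inputs, true labels).
    The student logits are assumed to be computed on adversarially perturbed
    signals; the teacher provides robust soft targets."""
    t_soft = softmax(teacher_logits, temperature)
    s_soft = softmax(student_adv_logits, temperature)
    # T^2 factor keeps gradient magnitudes comparable across temperatures.
    distill = (temperature ** 2) * kl_divergence(t_soft, s_soft)
    s_prob = softmax(student_adv_logits)
    ce = -np.log(s_prob[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * distill + (1 - alpha) * ce))
```

In a full training loop, the adversarial inputs fed to the student would be generated online (e.g. by a gradient-based attack such as PGD against the student), so that robustness is distilled rather than merely accuracy.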
Keywords