IEEE Access (Jan 2023)

Defending AI-Based Automatic Modulation Recognition Models Against Adversarial Attacks

  • Haolin Tang
  • Ferhat Ozgur Catak
  • Murat Kuzlu
  • Evren Catak
  • Yanxiao Zhao

DOI: https://doi.org/10.1109/ACCESS.2023.3296805
Journal volume & issue: Vol. 11, pp. 76629–76637

Abstract

Automatic Modulation Recognition (AMR) is a critical step in the signal-processing chain of wireless networks and can significantly improve communication performance. AMR detects the modulation scheme of the received signal without any prior information. Inspired by the considerable progress of Artificial Intelligence (AI) in various fields, many AI-based AMR methods have recently been proposed. On the one hand, AI-based AMR methods can outperform traditional methods in terms of accuracy and efficiency. On the other hand, they are susceptible to new types of cyberattacks, such as model poisoning and adversarial attacks. This paper explores the vulnerabilities of an AI-based AMR model to adversarial attacks in both single-input single-output (SISO) and multiple-input multiple-output (MIMO) scenarios. We show that these attacks can significantly reduce the classification performance of the AI-based AMR model, highlighting security and robustness concerns. We therefore apply a widely used mitigation method, defensive distillation, to reduce the model's vulnerability to adversarial attacks. Simulation results indicate that the AI-based AMR model can be highly vulnerable to adversarial attacks, but that this vulnerability can be significantly reduced by the mitigation method.
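Illustrative sketch. The abstract names the defense (defensive distillation) but not the attack algorithms, the classifier architecture, or the dataset, so everything below is an assumption for illustration: a RadioML-style setup with 11 modulation classes and frames of 128 I/Q samples, a small stand-in CNN, FGSM as one widely used gradient-based attack, and a distillation temperature of T = 20. This is a minimal TensorFlow/Keras sketch of the technique, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 11        # assumption: 11 modulation classes (RadioML-style)
INPUT_SHAPE = (128, 2)  # assumption: 128 complex samples stored as (I, Q) pairs
T = 20.0                # distillation temperature; the paper's value is not stated here

def build_amr_model():
    """Small CNN over raw I/Q frames; a stand-in for the authors' AMR
    classifier. The last layer outputs logits so the softmax temperature
    can be applied inside the loss."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=INPUT_SHAPE),
        tf.keras.layers.Conv1D(64, 3, activation="relu"),
        tf.keras.layers.Conv1D(32, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES),  # logits, no softmax
    ])

def distillation_loss(temperature):
    """Cross-entropy on a temperature-scaled softmax of the logits."""
    def loss(y_true, logits):
        return tf.keras.losses.categorical_crossentropy(
            y_true, tf.nn.softmax(logits / temperature))
    return loss

def fgsm(model, x, y_true, eps=0.01):
    """Fast Gradient Sign Method: perturb the input along the sign of the
    loss gradient. Shown only as an example attack for evaluation."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = distillation_loss(1.0)(y_true, model(x))
    return x + eps * tf.sign(tape.gradient(loss, x))

# Placeholder data; substitute a real AMR dataset such as RadioML.
x_train = np.random.randn(512, *INPUT_SHAPE).astype("float32")
y_train = tf.keras.utils.to_categorical(
    np.random.randint(NUM_CLASSES, size=512), NUM_CLASSES)

# 1) Train a teacher at temperature T on the hard one-hot labels.
teacher = build_amr_model()
teacher.compile(optimizer="adam", loss=distillation_loss(T))
teacher.fit(x_train, y_train, epochs=2, batch_size=128)

# 2) Relabel the training set with the teacher's softened probabilities.
soft_labels = tf.nn.softmax(teacher.predict(x_train) / T).numpy()

# 3) Train a student of the same architecture on the soft labels, also at T.
student = build_amr_model()
student.compile(optimizer="adam", loss=distillation_loss(T))
student.fit(x_train, soft_labels, epochs=2, batch_size=128)

# 4) Evaluate at T = 1: craft FGSM examples against the student and classify.
x_adv = fgsm(student, x_train[:64], y_train[:64], eps=0.01)
adv_probs = tf.nn.softmax(student.predict(x_adv))
```

Training at a high temperature and then predicting at T = 1 flattens the model's input-output gradients, which is exactly what gradient-based attacks such as FGSM exploit; this is the standard rationale for defensive distillation.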
