IEEE Access (Jan 2021)

Faking Signals to Fool Deep Neural Networks in AMC via Few Data Points

  • Hongbin Ma,
  • Shuyuan Yang,
  • Guangjun He,
  • Ruowu Wu,
  • Xiaojun Hao,
  • Tingpeng Li,
  • Zhixi Feng

DOI
https://doi.org/10.1109/ACCESS.2021.3106704
Journal volume & issue
Vol. 9
pp. 124425–124433

Abstract


Recent years have witnessed the rapid development of Deep Learning (DL) based Automatic Modulation Classification (AMC) methods, which have been shown to outperform traditional classification approaches. In order to disturb deep neural networks for AMC, in this paper we propose an adversarial attack method that generates fake signals to fool DL-based classifiers. First, constraints on the visual difference and recoverability of fake signals are defined. Next, a Few Data Point Attacker (FDPA) is proposed to generate fake signals by perturbing only a few data points via a differential evolution algorithm. Experiments are conducted on a public dataset, RML2016.10a, and the results show that fake signals generated by the FDPA remarkably reduce the accuracy of three types of DL-based AMC classifiers: a Convolutional Neural Network (CNN) based classifier, a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) based classifier, and a classifier combining CNN and LSTM-RNN. The code will be made available.
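As a concrete illustration, the sketch below shows how an attack in the spirit of the FDPA might be implemented: differential evolution searches for a handful of (index, perturbation) triples applied to the I/Q channels of a signal so as to minimize the classifier's confidence in the true class. The abstract does not specify the paper's exact candidate encoding, constraints, or DE variant; the classifier object (assumed to expose a Keras-style predict), the per-point encoding, and the eps bound are all assumptions made for illustration.

    # Minimal sketch of a few-data-point adversarial attack via differential
    # evolution. Encoding, bounds, and classifier interface are assumptions,
    # not the paper's actual method.
    import numpy as np
    from scipy.optimize import differential_evolution

    def attack_few_points(signal, true_label, classifier, k=3, eps=0.05):
        """Search for k perturbed (I, Q) samples that minimize the
        classifier's confidence in the true class.
        `signal` has shape (2, L): rows are the I and Q channels."""
        L = signal.shape[1]
        # Each perturbed point is encoded as (index, delta_I, delta_Q).
        bounds = [(0, L - 1), (-eps, eps), (-eps, eps)] * k

        def apply(x):
            fake = signal.copy()
            for i in range(k):
                idx, d_i, d_q = x[3 * i:3 * i + 3]
                idx = int(round(idx))   # continuous index -> sample position
                fake[0, idx] += d_i
                fake[1, idx] += d_q
            return fake

        def objective(x):
            # Lower confidence in the true class is better for the attacker.
            probs = classifier.predict(apply(x)[np.newaxis, ...])[0]
            return probs[true_label]

        result = differential_evolution(objective, bounds, maxiter=50,
                                        popsize=20, tol=1e-6, seed=0)
        return apply(result.x), result.fun  # fake signal, residual confidence

Because differential evolution needs only the classifier's output probabilities, this attack is gradient-free and treats the model as a black box, which is consistent with perturbing only a few data points as described in the abstract.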

Keywords