IET Communications (Apr 2023)

Probability density function based data augmentation for deep neural network automatic modulation classification with limited training data

  • Chongzheng Hao,
  • Xiaoyu Dang,
  • Xiangbin Yu,
  • Sai Li,
  • Chenghua Wang

DOI
https://doi.org/10.1049/cmu2.12588
Journal volume & issue
Vol. 17, no. 7
pp. 852–862

Abstract


Deep neural network (DNN) based automatic modulation classification (AMC) has achieved high classification accuracy. However, DNNs are data-hungry models, and training them requires a large volume of data; insufficient training data causes overfitting and severe performance degradation. In practical AMC tasks, training a deep model with sufficient data is challenging because data collection is costly. To this end, a novel probability density function (PDF) based data augmentation scheme, together with a method to determine the minimum sampling size required for data enlargement, is proposed. Compared with known image-based augmentation schemes, the proposed waveform-based PDF technique has low complexity and is easy to implement. Experimental results show that the required training-set size is one order of magnitude smaller than the sufficient dataset in the additive white Gaussian noise channel, and that effective recognition can be achieved with around 60% of the total examples under the Rayleigh channel. Moreover, the presented scheme can also expand training data under frequency and phase offsets.
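The abstract does not detail how the PDF-based augmentation is carried out. The sketch below is only one plausible interpretation of density-based waveform augmentation, assuming new I/Q samples are drawn from a kernel density estimate fitted to the limited per-class training data; the function names, frame shapes, and KDE choice are illustrative assumptions, not the authors' published method.

```python
# Hypothetical sketch: enlarge a small AMC training set by resampling from a
# kernel density estimate (KDE) of the per-class I/Q sample distribution.
# The paper's exact PDF-based scheme is not specified in the abstract; a 2-D
# Gaussian KDE is assumed here purely for illustration.
import numpy as np
from scipy.stats import gaussian_kde

def augment_class(iq_frames: np.ndarray, n_new: int, rng=None) -> np.ndarray:
    """iq_frames: (n_frames, frame_len, 2) real/imag parts for one modulation class.
    Returns n_new synthetic frames drawn from a KDE of the pooled (I, Q) samples."""
    # Pool all (I, Q) pairs of the class and fit a 2-D density estimate.
    samples = iq_frames.reshape(-1, 2).T          # shape (2, n_frames * frame_len)
    kde = gaussian_kde(samples)
    frame_len = iq_frames.shape[1]
    # Draw enough i.i.d. (I, Q) points to assemble n_new frames.
    drawn = kde.resample(n_new * frame_len, seed=rng)  # shape (2, n_new * frame_len)
    return drawn.T.reshape(n_new, frame_len, 2)

# Usage: enlarge a 100-frame subset of one class to 1000 frames.
rng = np.random.default_rng(0)
small_set = rng.normal(size=(100, 128, 2)).astype(np.float32)  # placeholder data
augmented = np.concatenate([small_set, augment_class(small_set, 900, rng)], axis=0)
print(augmented.shape)  # (1000, 128, 2)
```

Note that resampling i.i.d. (I, Q) points discards the temporal structure of the waveform; this is a limitation of the illustrative KDE sketch, and the waveform-based scheme described in the paper may model the signal distribution differently.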

Keywords