IEEE Access (Jan 2024)

Data-Transform Multi-Channel Hybrid Deep Learning for Automatic Modulation Recognition

  • Meng Qi,
  • Nianfeng Shi,
  • Guoqiang Wang,
  • Hongxiang Shao

DOI
https://doi.org/10.1109/ACCESS.2024.3393481
Journal volume & issue
Vol. 12
pp. 59113 – 59121

Abstract


Automatic modulation recognition (AMR) is an essential topic in cognitive radio; it is of great significance for the analysis of wireless signals and is one of the current research hotspots. Traditional AMR approaches predominantly use raw in-phase/quadrature (I/Q) symbols, amplitude/phase (A/P) representations, or pre-processed data (e.g., high-order cumulants, spectrum images, or constellation diagrams) as inputs to the recognition model. However, it is difficult to achieve superior performance with only a single type of data as input. This paper proposes a novel multi-channel hybrid learning framework that integrates convolutional layers, Long Short-Term Memory (LSTM) layers, fully connected layers, and classification layers. The model captures spatial-temporal correlations from four signal cues (I/Q signals, A/P signals, I signals, and Q signals), aiming to exploit the differences among, and the complementarity of, multiple data forms. Two functions employed during the data-conversion process further enhance the non-linear representational capacity of the model, thereby boosting its recognition accuracy. Experimental results demonstrate that the proposed framework effectively addresses the challenge of distinguishing QAM16 from QAM64. On the RML2016.10A dataset, the model achieves a recognition accuracy of 95% at an SNR of 0 dB. Extensive experiments indicate that the proposed framework outperforms other current networks in recognition accuracy.
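To make the described architecture concrete, the following is a minimal, hypothetical PyTorch sketch of a multi-channel CNN + LSTM classifier over four data forms derived from raw I/Q samples. It is not the authors' released code: the layer sizes, the A/P conversion, the branch fusion, and the assumed RML2016.10A-style input shape of (2, 128) with 11 modulation classes are all illustrative assumptions.

```python
# Hypothetical sketch of a multi-channel hybrid AMR model (not the authors' code).
# Assumes raw I/Q frames of shape (batch, 2, 128) and 11 modulation classes.
import torch
import torch.nn as nn


def iq_to_ap(iq: torch.Tensor) -> torch.Tensor:
    """Convert an I/Q tensor of shape (batch, 2, length) to amplitude/phase."""
    i, q = iq[:, 0], iq[:, 1]
    amplitude = torch.sqrt(i ** 2 + q ** 2)
    phase = torch.atan2(q, i)
    return torch.stack([amplitude, phase], dim=1)


class BranchCNNLSTM(nn.Module):
    """One channel: 1-D convolutions followed by an LSTM over the time axis."""

    def __init__(self, in_channels: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)          # (batch, 32, length)
        x = x.transpose(1, 2)     # (batch, length, 32) for the LSTM
        _, (h, _) = self.lstm(x)
        return h[-1]              # last hidden state, (batch, hidden)


class MultiChannelAMR(nn.Module):
    """Four branches (I/Q, A/P, I, Q) fused by fully connected layers."""

    def __init__(self, num_classes: int = 11, hidden: int = 64):
        super().__init__()
        self.iq_branch = BranchCNNLSTM(2, hidden)
        self.ap_branch = BranchCNNLSTM(2, hidden)
        self.i_branch = BranchCNNLSTM(1, hidden)
        self.q_branch = BranchCNNLSTM(1, hidden)
        self.classifier = nn.Sequential(
            nn.Linear(4 * hidden, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        ap = iq_to_ap(iq)
        features = torch.cat(
            [
                self.iq_branch(iq),
                self.ap_branch(ap),
                self.i_branch(iq[:, 0:1]),
                self.q_branch(iq[:, 1:2]),
            ],
            dim=1,
        )
        return self.classifier(features)  # logits over modulation classes


if __name__ == "__main__":
    model = MultiChannelAMR()
    dummy = torch.randn(8, 2, 128)  # batch of 8 raw I/Q frames
    print(model(dummy).shape)       # torch.Size([8, 11])
```

The sketch mirrors the abstract's structure (per-channel convolution for spatial features, LSTM for temporal correlation, fully connected fusion and classification), but the paper's specific data-conversion functions and hyperparameters should be taken from the article itself.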

Keywords