Sensors (Apr 2022)

State-of-the-Art Capability of Convolutional Neural Networks to Distinguish the Signal in the Ionosphere

  • Yu-Chi Chang,
  • Chia-Hsien Lin,
  • Alexei V. Dmitriev,
  • Mon-Chai Hsieh,
  • Hao-Wei Hsu,
  • Yu-Ciang Lin,
  • Merlin M. Mendoza,
  • Guan-Han Huang,
  • Lung-Chih Tsai,
  • Yung-Hui Li,
  • Enkhtuya Tsogtbaatar

DOI: https://doi.org/10.3390/s22072758
Journal volume & issue: Vol. 22, no. 7, p. 2758

Abstract

Recovering and distinguishing different ionospheric layers and signals usually requires slow and complicated procedures. In this work, we construct and train five convolutional neural network (CNN) models for the recovery of ionograms: DeepLab, fully convolutional DenseNet24 (FC-DenseNet24), deep watershed transform (DWT), Mask R-CNN, and spatial attention-UNet (SA-UNet). The performance of the models is evaluated by intersection over union (IoU). We collect and manually label 6131 ionograms acquired from a low-latitude ionosonde in Taiwan. These ionograms are contaminated by strong quasi-static noise, with an average signal-to-noise ratio (SNR) of 1.4. Applying the five models to these noisy ionograms, we show that they can recover useful signals with IoU > 0.6, with the highest accuracy achieved by SA-UNet. Signals that account for less than 15% of the samples in the data set can still be recovered by Mask R-CNN to some degree (IoU > 0.2). In addition to the number of samples, we identify and examine the effects of three factors on recovery accuracy: (1) SNR, (2) the shape of the signal, and (3) overlapping of signals. Our results indicate that FC-DenseNet24, DWT, Mask R-CNN, and SA-UNet are capable of identifying signals in very noisy ionograms (SNR < 1.4), that overlapping signals are well identified by DWT, Mask R-CNN, and SA-UNet, and that more elongated signals are better identified by all models.
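
The abstract evaluates the segmentation models with intersection over union (IoU). The following is a minimal sketch, not the authors' code, of how per-class IoU can be computed for integer-labeled masks; the function name, class labels, and toy arrays are hypothetical and only illustrate the metric.

```python
import numpy as np

def iou_per_class(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> dict:
    """Return {class_id: IoU} for two integer-labeled masks of equal shape."""
    scores = {}
    for c in range(n_classes):
        pred_c = pred == c
        truth_c = truth == c
        intersection = np.logical_and(pred_c, truth_c).sum()
        union = np.logical_or(pred_c, truth_c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            scores[c] = float(intersection) / float(union)
    return scores

# Toy example with a hypothetical 2-class mask (0 = background, 1 = signal):
pred  = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 0], [0, 1, 1]])
print(iou_per_class(pred, truth, n_classes=2))  # {0: 0.5, 1: 0.5}
```

Computing the score per class, as sketched here, is one way to report recovery accuracy separately for rare signal types (e.g., those with less than 15% of the samples) rather than averaging them away.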

Keywords