Atmosphere (Feb 2023)

A Hybrid Deep Learning Model for Air Quality Prediction Based on the Time–Frequency Domain Relationship

  • Rui Xu,
  • Deke Wang,
  • Jian Li,
  • Hang Wan,
  • Shiming Shen,
  • Xin Guo

DOI
https://doi.org/10.3390/atmos14020405
Journal volume & issue
Vol. 14, no. 2
p. 405

Abstract


Deep learning models have been widely used for time-series numerical prediction of atmospheric environmental quality. The fundamental task in this application is to discover the correlations between influencing factors and target parameters through a deep network structure. These relationships in the original data are affected by several factors acting at different frequencies. If a deep network is applied without guidance, these correlations may be masked by entangled multifrequency data, which leads to insufficient extraction of correlation features and makes the model difficult to interpret. Because the wavelet transform can separate such entangled multifrequency data, and the resulting correlations can then be extracted by deep learning methods, a hybrid model combining the wavelet transform with a Transformer-like network (WTformer) was designed to extract time–frequency domain features and predict air quality. Hourly data from Guilin for 2018–2021 were used as the benchmark training dataset. Pollutants and meteorological variables in the local dataset were decomposed into five frequency bands by the wavelet transform. Analysis of the WTformer model showed that particulate matter (PM2.5 and PM10) had an obvious correlation in the low-frequency band and a low correlation in the high-frequency band. PM2.5 and temperature had a negative correlation in the high-frequency band and an obvious positive correlation in the low-frequency band. PM2.5 and wind speed had a low correlation in the high-frequency band and an obvious negative correlation in the low-frequency band. These results showed that the behavior of the variables in the time–frequency domain could be identified by the model, which makes the model easier to interpret. The experimental results show that the prediction performance of the proposed model was better than that of the multilayer perceptron (MLP), one-dimensional convolutional neural network (1D-CNN), gated recurrent unit (GRU), long short-term memory (LSTM) and Transformer models at all time steps (1, 4, 8, 24 and 48 h).
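
The paper itself does not include code; the following is a minimal sketch of the wavelet-decomposition step described above, assuming the PyWavelets library and a hypothetical NumPy array standing in for an hourly PM2.5 series. A 4-level discrete wavelet transform yields five coefficient bands (one low-frequency approximation and four detail bands), matching the five frequency bands mentioned in the abstract; how these bands are fed to the Transformer-like network is not shown here.

```python
# Minimal sketch (not the authors' code): decompose an hourly pollutant series
# into five frequency bands with a 4-level discrete wavelet transform.
# Assumes PyWavelets (pywt) is installed; `pm25` is a hypothetical example series.
import numpy as np
import pywt

rng = np.random.default_rng(0)
pm25 = rng.normal(35.0, 10.0, size=24 * 365)  # stand-in for one year of hourly PM2.5 data

# A 4-level DWT returns [cA4, cD4, cD3, cD2, cD1]:
# one low-frequency approximation band and four progressively higher-frequency detail bands.
bands = pywt.wavedec(pm25, wavelet="db4", level=4)
for name, coeffs in zip(["cA4", "cD4", "cD3", "cD2", "cD1"], bands):
    print(name, coeffs.shape)

# Each band can be mapped back to a time-domain signal (e.g., to provide
# band-separated inputs to a downstream network) by zeroing the other bands
# before inverse transformation.
low_freq = pywt.waverec([bands[0]] + [np.zeros_like(c) for c in bands[1:]], "db4")
```

The same decomposition would be applied to each pollutant and meteorological variable before correlation analysis and prediction, so that low- and high-frequency relationships (such as those reported for PM2.5 versus temperature and wind speed) can be examined separately.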

Keywords