Heliyon (Mar 2024)

Attention-Based Models for Multivariate Time Series Forecasting: Multi-step Solar Irradiation Prediction

  • Sadman Sakib,
  • Mahin K. Mahadi,
  • Samiur R. Abir,
  • Al-Muzadded Moon,
  • Ahmad Shafiullah,
  • Sanjida Ali,
  • Fahim Faisal,
  • Mirza M. Nishat

Journal volume & issue
Vol. 10, no. 6
p. e27795

Abstract

Bangladesh's subtropical climate, with abundant sunlight throughout most of the year, makes solar panels particularly effective. Solar irradiance forecasting is an essential aspect of grid-connected photovoltaic systems: it helps manage the variability and uncertainty of solar power and assists in balancing supply and demand, which is why accurate forecasts of solar irradiation are essential. Solar irradiation is influenced by many meteorological factors and exhibits a high degree of fluctuation and uncertainty, and predicting it multiple steps ahead requires forecasting models to capture long-term sequential relationships, which is difficult. Attention-based models are widely used in natural language processing for their ability to learn long-term dependencies within sequential data. In this paper, we present an attention-based model framework for multivariate time series forecasting. Using data from two locations in Bangladesh at a 30-min resolution, attention-based encoder-decoder, Transformer, and Temporal Fusion Transformer (TFT) models are trained and tested to predict 24 steps ahead and are compared with other forecasting models. According to our findings, adding the attention mechanism significantly increases prediction accuracy, and the TFT proves more accurate and robust than the other algorithms. The mean squared error (MSE), mean absolute error (MAE), and coefficient of determination (R²) obtained for the TFT are 0.151, 0.212, and 0.815, respectively. Compared with the benchmark and sequential models (including the Naive, MLP, and encoder-decoder models), the TFT reduces MSE and MAE by 8.4–47.9% and 6.1–22.3%, respectively, and raises R² by 2.13–26.16%. The ability to incorporate long-distance dependencies increases the predictive power of attention models.
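
For context on the mechanism and metrics quoted above: scaled dot-product attention (Vaswani et al., 2017) is the building block shared by the Transformer and the TFT, and the three reported error measures have standard definitions. The NumPy sketch below is a minimal illustration under assumed array shapes, not the authors' implementation; all function and variable names here are our own.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Scaled dot-product attention (Vaswani et al., 2017).

        Q: (T_q, d) queries; K: (T_k, d) keys; V: (T_k, d_v) values.
        Each output step is a weighted average over all T_k value
        vectors, so step t can draw directly on arbitrarily distant
        past observations -- the long-distance dependency credited
        with the accuracy gain in the abstract.
        """
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)   # (T_q, T_k) similarity matrix
        # Numerically stable softmax over the key axis
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V              # (T_q, d_v) attended output

    def forecast_metrics(y_true, y_pred):
        """MSE, MAE, and R² for a multi-step forecast.

        y_true, y_pred: arrays of shape (n_samples, n_steps), e.g. the
        24-step-ahead irradiance forecasts described in the abstract
        (the shape is an illustrative assumption).
        """
        y_true = np.asarray(y_true, dtype=float).ravel()
        y_pred = np.asarray(y_pred, dtype=float).ravel()
        mse = np.mean((y_true - y_pred) ** 2)
        mae = np.mean(np.abs(y_true - y_pred))
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - y_true.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        return mse, mae, r2

Because each output step attends to every past step directly, attention sidesteps the vanishing influence of distant observations that limits purely recurrent encoder-decoders.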

Keywords