Results in Engineering (Mar 2023)

Forecasting energy consumption demand of customers in smart grid using Temporal Fusion Transformer (TFT)

  • Amril Nazir,
  • Abdul Khalique Shaikh,
  • Abdul Salam Shah,
  • Ashraf Khalil

Journal volume & issue
Vol. 17
p. 100888

Abstract

Energy consumption prediction has long been a concern for researchers because of rapid population growth and the increasing number of customers joining smart grid networks for smart home facilities. Recently, the spread of COVID-19 has dramatically increased energy consumption in the residential sector. It is therefore essential to produce energy according to residential customers' requirements, improve economic efficiency, and reduce production costs. Previously published papers have considered overall energy consumption prediction, making it difficult for production companies to generate energy according to customers' future demand. Using the proposed approach, production companies can accurately match energy production to their customers' needs by forecasting future energy consumption demands. Scientists and researchers are trying to minimize energy consumption by applying different optimization and prediction techniques; hence, this study proposes a daily, weekly, and monthly energy consumption prediction model using the Temporal Fusion Transformer (TFT). The TFT model considers both primary and valuable data sources and uses batch training techniques. Its performance has been compared with Long Short-Term Memory (LSTM), interpretable LSTM, and Temporal Convolutional Network (TCN) models. The proposed model outperformed the other algorithms, with a mean squared error (MSE) of 4.09, root mean squared error (RMSE) of 2.02, and mean absolute error (MAE) of 1.50. Further, the overall symmetric mean absolute percentage errors (sMAPE) of LSTM, interpretable LSTM, TCN, and the proposed TFT were 29.78%, 31.10%, 36.42%, and 26.46%, respectively. The lower sMAPE of the TFT confirms that the model performed better than the other deep learning models.
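The abstract reports MSE, RMSE, MAE, and sMAPE as evaluation metrics. As a minimal sketch of how such metrics are typically computed (the paper does not specify its exact sMAPE variant; the common form with the factor of 2 in the numerator is assumed here):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: the square root of the MSE."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_pred - y_true)))

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent.

    Assumes the common 2|e| / (|y| + |y_hat|) definition; the paper
    may use a slightly different variant.
    """
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(100.0 * np.mean(
        2.0 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred))))
```

Note that RMSE is the square root of MSE, consistent with the reported values (sqrt(4.09) ≈ 2.02).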

Keywords