IEEE Access (Jan 2021)

Comparison of Deep Learning Techniques for River Streamflow Forecasting

  • Xuan-Hien Le,
  • Duc-Hai Nguyen,
  • Sungho Jung,
  • Minho Yeon,
  • Giha Lee

DOI: https://doi.org/10.1109/ACCESS.2021.3077703
Journal volume & issue: Vol. 9, pp. 71805–71820

Abstract

Recently, deep learning (DL) models, especially those based on long short-term memory (LSTM), have demonstrated a superior ability to resolve sequential data problems. This study evaluated six supervised-learning models for streamflow forecasting: a feed-forward neural network (FFNN), a convolutional neural network (CNN), and four LSTM-based models. Two standard models with a single hidden layer, LSTM and the gated recurrent unit (GRU), are compared against two more complex models: the stacked LSTM (StackedLSTM) and the bidirectional LSTM (BiLSTM). The Red River basin, the largest river basin in northern Vietnam, was adopted as a case study because of its geographic relevance: Hanoi, the capital of Vietnam, lies downstream of the Red River. The input data for these models are observations from seven hydrological stations on the three main branches of the Red River system. The results indicate that the four LSTM-based models performed considerably better and remained more stable than the FFNN and CNN models. However, the added complexity of the StackedLSTM and BiLSTM models did not translate into better performance: in the comparison, neither outperformed the two standard models, LSTM and GRU. The findings show that LSTM-based models can produce impressive forecasts even in the presence of upstream dams and reservoirs. For the streamflow-forecasting problem, LSTM and GRU models with a simple architecture (one hidden layer) are sufficient to produce highly reliable forecasts while minimizing computation time.
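To make the single-hidden-layer LSTM architecture discussed in the abstract concrete, the sketch below implements one LSTM cell step in plain NumPy and unrolls it over a short input sequence. This is an illustrative sketch only, not the authors' implementation: the dimensions (seven input features, loosely mirroring the seven hydrological stations; 16 hidden units), the random weights, and the function name `lstm_step` are all assumptions for demonstration.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell.

    x: input vector at the current time step
    h, c: previous hidden and cell states
    W, U, b: stacked weights for the four gates (input, forget, output, candidate)
    """
    n = h.shape[0]
    z = W @ x + U @ h + b                  # pre-activations for all four gates, shape (4n,)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i = sigmoid(z[:n])                     # input gate
    f = sigmoid(z[n:2 * n])                # forget gate
    o = sigmoid(z[2 * n:3 * n])            # output gate
    g = np.tanh(z[3 * n:])                 # candidate cell update
    c_new = f * c + i * g                  # update cell state
    h_new = o * np.tanh(c_new)             # new hidden state
    return h_new, c_new

# Illustrative dimensions (assumed): 7 input features, 16 hidden units
rng = np.random.default_rng(0)
d_in, n_hid = 7, 16
W = rng.normal(scale=0.1, size=(4 * n_hid, d_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

# Unroll over 5 past time steps of (synthetic) station observations
h = np.zeros(n_hid)
c = np.zeros(n_hid)
seq = rng.normal(size=(5, d_in))
for x in seq:
    h, c = lstm_step(x, h, c, W, U, b)

# In a forecasting model, a final dense layer would map h to the streamflow estimate
```

In a trained model, `W`, `U`, and `b` would be learned from historical discharge records rather than drawn at random; the study's finding is that this single recurrent layer, followed by an output layer, already suffices for reliable forecasts.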

Keywords