PLoS ONE (Jan 2023)

Water level prediction using soft computing techniques: A case study in the Malwathu Oya, Sri Lanka.

  • Namal Rathnayake,
  • Upaka Rathnayake,
  • Tuan Linh Dang,
  • Yukinobu Hoshino

DOI: https://doi.org/10.1371/journal.pone.0282847
Journal volume & issue: Vol. 18, no. 4, p. e0282847

Abstract

Hydrologic models that simulate river flows are computationally costly. In addition to precipitation and other meteorological time series, catchment characteristics, including soil data, land use, land cover, and roughness, are essential inputs to most hydrologic models. The unavailability of these data series limits the accuracy of simulations. However, recent advances in soft computing techniques offer better approaches and solutions at lower computational complexity. These techniques require a minimal amount of data, yet they can reach high accuracies depending on the quality of the data sets. Gradient boosting algorithms and the Adaptive Network-based Fuzzy Inference System (ANFIS) are two such approaches that can be used to simulate river flows from catchment rainfall. In this paper, the computational capabilities of these two systems were tested by developing river flow prediction models for the Malwathu Oya in Sri Lanka. The simulated flows were then compared with ground-measured river flows for accuracy. The correlation coefficient (R), percent bias (bias), Nash-Sutcliffe model efficiency (NSE), mean absolute relative error (MARE), Kling-Gupta efficiency (KGE), and root mean square error (RMSE) were used as comparative indices between the gradient boosting algorithms and ANFIS. The results showed that both systems can simulate river flows as a function of catchment rainfall; however, the categorical boosting algorithm (CatBoost) has a computational edge over ANFIS. CatBoost outperformed the other algorithms used in this study, with the best correlation score of 0.9934 on the testing dataset. The extreme gradient boosting (XGBoost), light gradient boosting machine (LightGBM), and ensemble models scored 0.9283, 0.9253, and 0.9109, respectively. However, more applications should be investigated before drawing firm conclusions.
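
The abstract does not include the authors' code, but the workflow it describes can be sketched briefly. The following is a minimal, illustrative Python sketch, not the paper's implementation: it fits a CatBoost regressor to synthetic rainfall-flow data (the gauge features, hyperparameters, and data-generating process are assumptions made here for illustration) and scores the predictions with four of the indices named in the study (R, RMSE, NSE, and KGE, the latter in its Gupta et al. 2009 form).

    # Illustrative sketch only: synthetic data, assumed features and hyperparameters.
    import numpy as np
    from catboost import CatBoostRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    # Hypothetical inputs: rainfall at three gauges plus one extra covariate.
    X = rng.gamma(shape=2.0, scale=5.0, size=(1000, 4))
    y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 1, 1000)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    # Assumed hyperparameters; the abstract does not report the study's settings.
    model = CatBoostRegressor(iterations=500, learning_rate=0.05, depth=6, verbose=0)
    model.fit(X_train, y_train)
    sim, obs = model.predict(X_test), y_test

    r = np.corrcoef(obs, sim)[0, 1]                # correlation coefficient (R)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))      # root mean square error
    nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
    alpha, beta = sim.std() / obs.std(), sim.mean() / obs.mean()
    kge = 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)  # Kling-Gupta
    print(f"R={r:.4f}  RMSE={rmse:.4f}  NSE={nse:.4f}  KGE={kge:.4f}")

The same fit/predict/score loop would apply to the XGBoost, LightGBM, and ANFIS models the study compares; only the estimator changes.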