Journal of Flood Risk Management (Jun 2024)
Beyond a fixed number: Investigating uncertainty in popular evaluation metrics of ensemble flood modeling using bootstrapping analysis
Abstract
Evaluating the performance of flood models is a crucial step in the modeling process. Single statistical metrics such as uncertainty bounds, the Nash-Sutcliffe efficiency, the Kling-Gupta efficiency, and the coefficient of determination are widely used in model evaluation but have known limitations; this study demonstrates the inherent properties of these metrics and the sampling uncertainty they carry. A comprehensive evaluation is conducted using an ensemble of one-dimensional Hydrologic Engineering Center's River Analysis System (HEC-RAS) models, which account for the uncertainty associated with channel roughness and upstream flow input, of six reaches located in Indiana and Texas in the United States. Specifically, the effects on the evaluation metrics of different prior distributions of the uncertainty sources, multiple high-flow scenarios, and various types of measurement error in the observations are investigated using bootstrapping. Results show that model performance based on uniform and normal priors is comparable. The statistical distributions of all the evaluation metrics in this study differ significantly across high-flow scenarios, suggesting that the metrics should be treated as "random" variables, owing to both aleatory and epistemic uncertainty, and conditioned on the specific flow periods of interest. Additionally, white-noise error in the observations has the least impact on the metrics.
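The core idea of the analysis can be illustrated with a minimal sketch: compute an evaluation metric (e.g., Nash-Sutcliffe or Kling-Gupta efficiency) on bootstrap resamples of the paired observed and simulated series, yielding a sampling distribution rather than a fixed number. This is an illustrative example only, not the authors' code; the function names, synthetic data, and the choice of time-step resampling are assumptions.

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of observations
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    # Kling-Gupta efficiency (correlation, variability, and bias components)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()   # variability ratio
    beta = sim.mean() / obs.mean()  # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def bootstrap_metric(obs, sim, metric, n_boot=1000, seed=0):
    # Resample paired time steps with replacement and recompute the metric,
    # producing an empirical sampling distribution of the score.
    rng = np.random.default_rng(seed)
    n = len(obs)
    vals = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        vals[b] = metric(obs[idx], sim[idx])
    return vals
```

A percentile interval over the bootstrap values (e.g., `np.percentile(vals, [2.5, 97.5])`) then quantifies the sampling uncertainty of the metric for a given flow period, which is the sense in which the abstract treats the metrics as "random" variables.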
Keywords