Machine Learning with Applications (Jun 2023)

Encoder–decoder-based image transformation approach for integrating multiple spatial forecasts

  • Hirotaka Hachiya,
  • Yusuke Masumoto,
  • Atsushi Kudo,
  • Naonori Ueda

Journal volume & issue
Vol. 12
p. 100473

Abstract

As the damage caused by heavy rainfall worsens, there is a growing demand for improved forecasts. One practical way to address this demand is the linear integration of multiple existing forecasts, which makes it possible to visualize the contribution of each forecast at each location. However, current methods such as arithmetic and Bayesian averaging assign each forecast a single weight shared across the entire space, making it difficult to account for local variations in importance. Additionally, while U-Net-based spatial forecasting models have been proposed, they are limited to short-term predictions, and their non-linear processing prevents the visualization of individual forecast contributions. To overcome these challenges, we propose a new integration framework based on U-Net image transformation that generates weight images, integrating the forecasts in a position- and time-dependent manner. To handle large and heavily imbalanced precipitation data and to enable this position- and time-dependent integration, we introduce novel extensions to the U-Net model. Experimental results on real precipitation forecast data from Japan demonstrate that the proposed method outperforms existing integration methods.
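To make the integration idea concrete, the following is a minimal sketch, not the authors' implementation, of how per-pixel weight images can combine several forecasts into one linearly integrated forecast. A small convolutional network stands in for the paper's extended U-Net, the class name WeightImageIntegrator and its layout are hypothetical, and the imbalance-handling and time-dependent extensions described in the abstract are omitted; the sketch assumes PyTorch.

    # Minimal sketch (not the authors' code) of the weight-image idea:
    # a small convolutional network, standing in for the paper's extended
    # U-Net, maps the stacked input forecasts to per-pixel weights, and the
    # final forecast is the pixel-wise weighted sum of the individual ones.
    import torch
    import torch.nn as nn

    class WeightImageIntegrator(nn.Module):
        def __init__(self, num_forecasts: int, hidden: int = 16):
            super().__init__()
            # Stand-in for the extended U-Net: any image-to-image network
            # that outputs one weight channel per input forecast works here.
            self.weight_net = nn.Sequential(
                nn.Conv2d(num_forecasts, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(hidden, num_forecasts, kernel_size=3, padding=1),
            )

        def forward(self, forecasts: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
            # forecasts: (batch, num_forecasts, H, W) precipitation images
            logits = self.weight_net(forecasts)
            # Softmax across the forecast dimension yields per-pixel weights
            # that sum to 1, so the integration stays a linear (convex)
            # combination and each forecast's contribution can be visualized.
            weights = torch.softmax(logits, dim=1)
            integrated = (weights * forecasts).sum(dim=1, keepdim=True)
            return integrated, weights

    # Usage: integrate three forecasts over a 64x64 grid.
    model = WeightImageIntegrator(num_forecasts=3)
    x = torch.rand(1, 3, 64, 64)   # three example forecast images
    y_hat, w = model(x)            # y_hat: (1, 1, 64, 64), w: (1, 3, 64, 64)

Applying the softmax across the forecast dimension keeps the combination convex at every pixel, which is what allows the learned weight images to be read directly as each forecast's local contribution.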

Keywords