Machine Learning with Applications (Dec 2023)

Position-dependent partial convolutions for supervised spatial interpolation

  • Hirotaka Hachiya,
  • Kotaro Nagayoshi,
  • Asako Iwaki,
  • Takahiro Maeda,
  • Naonori Ueda,
  • Hiroyuki Fujiwara

Journal volume & issue
Vol. 14
p. 100514

Abstract

Acquiring continuous spatial data, e.g., spatial ground motion, is essential to assess damaged areas and appropriately assign rescue and medical teams. Therefore, spatial interpolation methods, e.g., inverse distance weighting and Kriging, have been developed to estimate the values at unobserved points linearly from neighboring observed values. Meanwhile, realistic, spatially continuous environmental data for various scenarios can be generated by 3-D finite-difference methods using a high-resolution structure model. This makes it possible to collect supervised data even for unobserved points. This paper therefore proposes a framework for supervised spatial interpolation and applies advanced deep inpainting methods, in which spatially distributed observed points are treated as masked images and are non-linearly expanded through convolutional encoder–decoder networks. However, the translation invariance of convolutions would prevent locally fine-grained interpolation, because the relation between the target and surrounding observation points varies among regions owing to their topography and subsurface structure. To overcome this issue, this paper proposes introducing position-dependent partial convolutions, in which kernel weights are adjusted according to their position in an image based on a trainable position-feature map. Experimental results on toy and ground-motion data show the effectiveness of the proposed method, called the Position-dependent Deep Inpainting Method.
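
To make the idea concrete, below is a minimal PyTorch sketch of a position-dependent partial convolution layer, based only on the description in the abstract. The class name PositionDependentPartialConv2d, the fixed grid size, the pos_ch width, and the sigmoid gain are illustrative assumptions; in particular, the paper adjusts the kernel weights themselves via a trainable position-feature map, which this sketch approximates with a per-position multiplicative gain on the shared kernel's response, not the authors' exact formulation.

```python
# A minimal sketch of a position-dependent partial convolution, assuming
# PyTorch. Partial convolution follows Liu et al. (2018): convolve only the
# observed (masked-in) pixels and renormalize by mask coverage. The
# position-dependent part here is a hypothetical per-pixel gain derived from
# a trainable position-feature map, standing in for the paper's mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionDependentPartialConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, height, width, pos_ch=8):
        super().__init__()
        padding = kernel_size // 2
        # Shared (translation-invariant) kernel of a standard partial conv.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=padding, bias=False)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        # All-ones kernel used to count observed pixels under each window.
        self.register_buffer(
            "mask_kernel", torch.ones(1, 1, kernel_size, kernel_size))
        # Trainable position-feature map: one feature vector per pixel of a
        # fixed spatial grid (shape is an assumption for this sketch).
        self.pos_map = nn.Parameter(torch.randn(1, pos_ch, height, width) * 0.01)
        # 1x1 conv mapping position features to per-position, per-channel
        # gains that rescale the shared kernel's response at each location.
        self.pos_gain = nn.Conv2d(pos_ch, out_ch, kernel_size=1)

    def forward(self, x, mask):
        # x: (B, in_ch, H, W) observed field; mask: (B, 1, H, W), 1 = observed.
        with torch.no_grad():
            coverage = F.conv2d(mask, self.mask_kernel,
                                padding=self.conv.padding[0])
            new_mask = (coverage > 0).float()
            # Renormalize: kernel area / number of observed pixels under it.
            scale = self.mask_kernel.numel() / coverage.clamp(min=1.0)
        out = self.conv(x * mask) * scale * new_mask
        # Position-dependent adjustment: a per-pixel multiplicative gain from
        # the trainable position-feature map, broadcast over the batch.
        gain = torch.sigmoid(self.pos_gain(self.pos_map))
        out = out * gain + self.bias.view(1, -1, 1, 1)
        return out, new_mask


# Usage: expand a sparsely observed 64x64 field (~5% stations observed).
layer = PositionDependentPartialConv2d(1, 16, kernel_size=3, height=64, width=64)
obs = torch.randn(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) < 0.05).float()
feat, mask = layer(obs * mask, mask)
print(feat.shape, mask.shape)  # (2, 16, 64, 64), (2, 1, 64, 64)
```

Stacking such layers in an encoder–decoder, with the mask progressively filled by new_mask, follows the deep inpainting pattern the abstract describes; the learned pos_map lets responses differ across regions whose topography and subsurface structure differ, which a purely translation-invariant kernel cannot do.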

Keywords