IEEE Access (Jan 2019)

Super-Resolution Integrated Building Semantic Segmentation for Multi-Source Remote Sensing Imagery

  • Zhiling Guo,
  • Guangming Wu,
  • Xiaoya Song,
  • Wei Yuan,
  • Qi Chen,
  • Haoran Zhang,
  • Xiaodan Shi,
  • Mingzhou Xu,
  • Yongwei Xu,
  • Ryosuke Shibasaki,
  • Xiaowei Shao

DOI: https://doi.org/10.1109/ACCESS.2019.2928646
Journal volume & issue: Vol. 7, pp. 99381–99397

Abstract

Multi-source remote sensing imagery has become widely accessible owing to the development of data acquisition systems. In this paper, we address the challenging task of the semantic segmentation of buildings from multi-source remote sensing imagery with different spatial resolutions. Unlike previous works, which mainly focused on optimizing the segmentation model and therefore could not fundamentally solve the severe problems caused by the resolution mismatch between the training and testing data, we propose integrating super-resolution (SR) techniques into the existing framework to enhance segmentation performance. The feasibility of the proposed method was evaluated using representative multi-source study materials: high-resolution (HR) aerial imagery and low-resolution (LR) panchromatic satellite imagery as the training and testing data, respectively. Instead of directly segmenting buildings in the LR imagery with the model trained on the HR imagery, a deep learning-based SR model is first applied to super-resolve the LR imagery into SR space, which mitigates the influence of the resolution difference between the training and testing data. The experimental results obtained from the test area in Tokyo, Japan, demonstrate that the proposed SR-integrated method significantly outperforms its counterpart without SR, improving the Jaccard index and kappa by approximately 19.01% and 19.10%, respectively. The results confirm that the proposed method is a viable tool for building semantic segmentation, especially when the resolutions of the training and testing data are unaligned.
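For concreteness, the inference pipeline the abstract describes can be summarized in a few lines of code. The following is a minimal, hypothetical PyTorch sketch, not the authors' released implementation: the `sr_model` and `seg_model` arguments stand in for whatever SR and segmentation networks are actually trained, and the Jaccard/kappa helpers are ordinary textbook definitions of the two reported metrics.

```python
# Hypothetical sketch of the SR-integrated segmentation pipeline:
# super-resolve the LR satellite tile first, then segment it with a
# model trained on HR aerial imagery. Model architectures and weights
# are placeholders, not the paper's released code.
import numpy as np
import torch


def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two binary building masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0


def kappa(pred: np.ndarray, truth: np.ndarray) -> float:
    """Cohen's kappa between two binary masks."""
    pred, truth = pred.astype(bool).ravel(), truth.astype(bool).ravel()
    po = (pred == truth).mean()                      # observed agreement
    pe = (pred.mean() * truth.mean()
          + (1 - pred.mean()) * (1 - truth.mean()))  # chance agreement
    return float((po - pe) / (1 - pe)) if pe < 1 else 1.0


def segment_lr_image(lr_image: torch.Tensor,
                     sr_model: torch.nn.Module,
                     seg_model: torch.nn.Module) -> np.ndarray:
    """Map an LR tile (C x H x W) to a binary building mask."""
    with torch.no_grad():
        sr_image = sr_model(lr_image.unsqueeze(0))   # LR -> SR space
        logits = seg_model(sr_image)                 # building logits
    return (torch.sigmoid(logits) > 0.5).squeeze().cpu().numpy()
```

The key design point is that the segmentation network never sees raw LR pixels: every test tile is lifted into SR space first, so the resolution statistics at test time resemble those of the HR aerial training data.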

Keywords