IEEE Access (Jan 2020)

An Adaptive Multiscale Fusion Network Based on Regional Attention for Remote Sensing Images

  • Wanzhen Lu,
  • Longxue Liang,
  • Xiaosuo Wu,
  • Xiaoyu Wang,
  • Jiali Cai

DOI
https://doi.org/10.1109/ACCESS.2020.3000425
Journal volume & issue
Vol. 8
pp. 107802 – 107813

Abstract

With the widespread application of semantic segmentation to high-resolution remote sensing images, improving segmentation accuracy has become a central research goal in the remote sensing field. An innovative fully convolutional network (FCN) based on regional attention is proposed to improve the performance of semantic segmentation for remote sensing images. The proposed network follows the encoder-decoder architecture common in semantic segmentation and adopts three strategies to improve accuracy. First, an enhanced GCN module is applied to capture the semantic features of remote sensing images. Second, an MGFM is proposed to capture different contexts by sampling at different densities. Third, a regional attention module (RAM) is introduced to assign large weights to high-value information in different regions of the feature map. The method is evaluated on two datasets: the ISPRS Potsdam dataset and the CCF dataset. The results indicate that the model with these strategies outperforms the baseline model (DCED50) in F1, mean IoU, and pixel accuracy (PA) by 10.81%, 19.11%, and 11.36% on the Potsdam dataset, and by 29.26%, 27.64%, and 13.57% on the CCF dataset.
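The abstract does not give implementation details, but the two ideas it names, capturing context by sampling at different densities and re-weighting regions of the feature map, can be illustrated with a short PyTorch-style sketch. The module names, dilation rates, region grid, and channel sizes below are assumptions made for the example, not the authors' actual MGFM or RAM design.

```python
# Illustrative sketch only: a multiscale context block (parallel dilated
# convolutions, i.e. "sampling at different densities") followed by a simple
# region-wise attention that re-weights spatial regions of the fused feature
# map. All hyperparameters here are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiscaleFusionBlock(nn.Module):
    """Fuses context captured at several (assumed) dilation rates."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Concatenate the parallel branches and project back to out_ch.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class RegionalAttention(nn.Module):
    """Assigns a learned weight to each cell of a coarse region grid."""
    def __init__(self, channels, grid=4):
        super().__init__()
        self.grid = grid
        self.score = nn.Conv2d(channels, 1, 1)  # one score per region

    def forward(self, x):
        # Pool the feature map into a grid x grid set of regions, score each
        # region, then rescale the original features by the region weights.
        regions = F.adaptive_avg_pool2d(x, self.grid)            # B x C x g x g
        weights = torch.sigmoid(self.score(regions))             # B x 1 x g x g
        weights = F.interpolate(weights, size=x.shape[-2:],
                                mode="nearest")                  # back to H x W
        return x * weights


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)            # dummy encoder features
    fused = MultiscaleFusionBlock(64, 64)(feat)
    out = RegionalAttention(64)(fused)
    print(out.shape)                             # torch.Size([2, 64, 32, 32])
```

In this sketch the dilated branches play the role of sampling at different densities, while the region-wise sigmoid weights stand in for assigning larger weights to high-value regions; the paper's actual modules may differ substantially.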

Keywords