IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)

Uncertainty-Guided Segmentation Network for Geospatial Object Segmentation

  • Hongyu Jia,
  • Wenwu Yang,
  • Lin Wang,
  • Haolin Li

DOI
https://doi.org/10.1109/JSTARS.2024.3361693
Journal volume & issue
Vol. 17
pp. 5824 – 5833

Abstract


Geospatial objects pose significant challenges, including dense distribution, substantial intraclass variations, and minimal interclass variations. These complexities make precise foreground object segmentation in high-resolution remote sensing images highly challenging. Current segmentation approaches often rely on the standard encoder–decoder architecture to extract object-related information, but overlook the inherent uncertainty that arises during decoding. In this article, we aim to enhance segmentation by introducing an uncertainty-guided decoding mechanism, and we propose the uncertainty-guided segmentation network (UGSNet). Specifically, building upon the conventional encoder–decoder architecture, we first employ the pyramid vision transformer to extract multilevel features containing extensive long-range information. We then introduce an uncertainty-guided decoding mechanism, addressing both epistemic and aleatoric uncertainties, to progressively refine the segmentation with higher certainty at each level. With this uncertainty-guided decoding mechanism, our UGSNet achieves accurate geospatial object segmentation. To validate the effectiveness of UGSNet, we conduct extensive experiments on the large-scale iSAID dataset, and the results demonstrate the superiority of our method over other state-of-the-art segmentation methods.
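The abstract does not spell out how the two uncertainty types are estimated. A common approach, sketched below under assumed details (not necessarily the authors' exact formulation), draws several stochastic forward passes (e.g., via Monte Carlo dropout) and uses the standard mutual-information decomposition: predictive entropy = aleatoric (expected entropy) + epistemic (mutual information). The resulting map can then gate decoder features so each level refines confident regions first; `uncertainty_gate` is a hypothetical illustration of that idea.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a categorical distribution."""
    return -(p * np.log(p + eps)).sum(axis=axis)

def uncertainty_maps(mc_logits):
    """mc_logits: (T, H, W, C) logits from T stochastic forward passes.
    Returns per-pixel (epistemic, aleatoric) maps of shape (H, W) using
    total (predictive) entropy = aleatoric + epistemic."""
    probs = softmax(mc_logits)                # (T, H, W, C)
    mean_p = probs.mean(axis=0)               # predictive distribution
    total = entropy(mean_p)                   # predictive entropy
    aleatoric = entropy(probs).mean(axis=0)   # expected entropy over passes
    epistemic = total - aleatoric             # mutual information (>= 0)
    return epistemic, aleatoric

def uncertainty_gate(feature, epistemic, aleatoric):
    """Hypothetical decoder gate: down-weight feature responses where the
    model is uncertain, so the next level focuses on confident regions."""
    u = epistemic + aleatoric
    gate = 1.0 - u / (u.max() + 1e-12)        # ~1 where certain, ~0 where not
    return feature * gate[..., None]
```

For example, pixels where the T passes disagree receive a high epistemic value and a gate near zero, while pixels with consistently sharp predictions pass through almost unchanged.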

Keywords