ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (May 2022)

PSCNET: EFFICIENT RGB-D SEMANTIC SEGMENTATION PARALLEL NETWORK BASED ON SPATIAL AND CHANNEL ATTENTION

  • S. Q. Du,
  • S. J. Tang,
  • W. X. Wang,
  • X. M. Li,
  • Y. H. Lu,
  • R. Z. Guo

DOI
https://doi.org/10.5194/isprs-annals-V-1-2022-129-2022
Journal volume & issue
Vol. V-1-2022
pp. 129 – 136

Abstract


RGB-D semantic segmentation is a key technology for indoor semantic map construction. Traditional RGB-D semantic segmentation networks often suffer from redundant parameters and modules. In this paper, an improved semantic segmentation network, PSCNet, is designed to reduce redundant parameters and make the model easier to implement. Based on the DeepLabv3+ framework, we improve the original model in three ways: attention module selection, backbone simplification, and Atrous Spatial Pyramid Pooling (ASPP) module simplification. Specifically, we use spatial-channel co-attention, remove the last module from the depth backbone, and redesign WW-ASPP with depthwise convolution. Compared with DeepLabv3+, the proposed PSCNet has approximately the same number of parameters but achieves a 5% improvement in mIoU. Meanwhile, PSCNet runs inference at 47 FPS on an RTX 3090, which is much faster than state-of-the-art semantic segmentation networks.
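The abstract names two of the key building blocks, a spatial-channel co-attention gate and an ASPP rebuilt from depthwise convolutions, without giving their exact layouts. The sketch below is only an illustrative reconstruction of those two ideas in PyTorch, not the authors' PSCNet code: the module names, the reduction ratio, and the mean/max spatial statistics are assumptions chosen to keep the example minimal and runnable.

```python
import torch
import torch.nn as nn


class SpatialChannelAttention(nn.Module):
    """Sketch of a spatial-channel co-attention gate (hypothetical layout,
    not the authors' exact PSCNet module)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel branch: global-average squeeze followed by a bottleneck MLP gate.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: 7x7 convolution over mean/max channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)
        stats = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_gate(stats)


class DepthwiseAtrousBranch(nn.Module):
    """One ASPP-style branch built from a depthwise atrous convolution plus a
    pointwise projection, illustrating how depthwise convolution can replace
    the standard 3x3 atrous convolutions (names and layout are assumptions)."""

    def __init__(self, channels, dilation):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=channels, bias=False)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


if __name__ == "__main__":
    feat = torch.randn(1, 256, 60, 80)          # e.g. a fused RGB-D feature map
    feat = SpatialChannelAttention(256)(feat)   # re-weight channels, then locations
    feat = DepthwiseAtrousBranch(256, dilation=6)(feat)
    print(feat.shape)                           # torch.Size([1, 256, 60, 80])
```

Replacing a standard 3x3 atrous convolution with the depthwise-plus-pointwise pair shown above reduces its parameter count roughly by a factor of the channel width, which is consistent with the paper's goal of cutting redundant parameters while keeping the DeepLabv3+ structure.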