IEEE Access (Jan 2024)

Typical Ground Object Recognition in Desert Areas Based on DYDCNet: A Case Study in the Circum-Tarim Region, Xinjiang, China

  • Junfu Fan,
  • Yu Gao,
  • Zongwen Shi,
  • Ping Li,
  • Guangwei Sun

DOI
https://doi.org/10.1109/ACCESS.2024.3388564
Journal volume & issue
Vol. 12
pp. 55800–55813

Abstract


Automatic semantic segmentation of ground objects in remote sensing images is a critical research direction in geographic information science. In vast and complex desert areas in particular, the wide spatial distribution of surface features, complex texture characteristics, and uneven class distribution of samples pose great challenges to object recognition and segmentation. To address these challenges, we propose an innovative semantic segmentation network that combines dynamic convolutional decomposition for feature extraction with multi-scale deformable convolution (referred to as DYDCNet). The network first introduces dynamic convolutional decomposition based on an attention mechanism, using a dynamic convolutional weight matrix to optimize the feature extraction process, which significantly reduces the number of network parameters and improves feature extraction efficiency. Subsequently, a deformable convolution technique fuses dilated (atrous) convolutions with multiple dilation rates to enlarge the receptive field and extract features at different scales. Finally, the segmentation results are refined and optimized by an encoder-decoder architecture. Together, these innovations enable DYDCNet to significantly improve prediction speed and segmentation accuracy when processing images of desert regions. Experimental results show that the network performs excellently on a dataset specifically constructed for desert ground objects, achieving a mean intersection over union of 87.75% and an overall accuracy of 91.35%, outperforming existing mainstream semantic segmentation networks.
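
To make the two core ideas in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the attention-driven dynamic convolution is reduced to channel-wise reweighting of a shared kernel, and the multi-scale branch is approximated with parallel dilated 3x3 convolutions (rates 1, 2, 4) rather than full deformable convolution. All module and parameter names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicDecomposedConv(nn.Module):
    """3x3 convolution whose input is modulated by an input-dependent channel
    attention vector (a simplified stand-in for dynamic convolutional
    decomposition)."""

    def __init__(self, in_ch, out_ch, reduction=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.02)
        self.attn = nn.Sequential(               # squeeze-and-excite style attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, in_ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch // reduction, in_ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        a = self.attn(x)                         # (N, C_in, 1, 1) attention weights
        x = x * a                                # dynamic, input-conditioned modulation
        return F.conv2d(x, self.weight, padding=1)


class MultiDilationBlock(nn.Module):
    """Parallel dilated convolutions that enlarge the receptive field at several
    scales, fused by a 1x1 convolution."""

    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 16, 64, 64)               # dummy remote sensing feature map
    feats = DynamicDecomposedConv(16, 32)(x)
    out = MultiDilationBlock(32, 32)(feats)
    print(out.shape)                             # torch.Size([1, 32, 64, 64])
```

In the full network these blocks would sit inside an encoder-decoder pipeline, with the decoder refining the multi-scale features into the final segmentation map.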

Keywords