Remote Sensing (Aug 2024)

A Global Spatial-Spectral Feature Fused Autoencoder for Nonlinear Hyperspectral Unmixing

  • Mingle Zhang,
  • Mingyu Yang,
  • Hongyu Xie,
  • Pinliang Yue,
  • Wei Zhang,
  • Qingbin Jiao,
  • Liang Xu,
  • Xin Tan

DOI: https://doi.org/10.3390/rs16173149
Journal volume & issue: Vol. 16, no. 17, p. 3149

Abstract


Hyperspectral unmixing (HU) aims to decompose mixed pixels into a set of endmembers and their corresponding abundances. Deep learning-based HU methods are currently a hot research topic, but most existing unmixing methods still rely on per-pixel training or employ convolutional neural networks (CNNs), which overlook the non-local correlations of materials and spectral characteristics. Furthermore, current research mainly focuses on linear mixing models, which limits the feature extraction capability of deep encoders and further improvement in unmixing accuracy. In this paper, we propose a nonlinear unmixing network capable of extracting global spatial-spectral features. The network is built on an autoencoder architecture in which a dual-stream CNN in the encoder separately extracts spectral and local spatial information. The extracted features are then fused to form a more complete representation of the input data. Subsequently, a linear projection-based multi-head self-attention mechanism captures global contextual information, allowing comprehensive spatial feature extraction while keeping computation lightweight. To achieve better reconstruction performance, a model-free nonlinear mixing approach is adopted to enhance the model's universality, with the mixing model learned entirely from the data. Additionally, an initialization method based on endmember bundles reduces interference from outliers and noise. Comparative results on real datasets against several state-of-the-art unmixing methods demonstrate the superiority of the proposed approach.
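To make the unmixing setup concrete, the following is a minimal NumPy sketch of the linear mixing model (LMM) that the abstract contrasts with the proposed nonlinear approach. All array sizes and variable names here are illustrative assumptions, not taken from the paper: a mixed pixel `y` over `B` spectral bands is modeled as `y = E @ a`, where `E` holds `R` endmember signatures and the abundance vector `a` satisfies the nonnegativity (ANC) and sum-to-one (ASC) constraints.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the paper):
# B spectral bands, R endmembers.
rng = np.random.default_rng(0)
B, R = 50, 3

# Synthetic endmember matrix: each column is one endmember's spectrum.
E = rng.random((B, R))

# Abundances under the physical constraints:
#   ANC: a >= 0    ASC: sum(a) == 1
a = np.array([0.6, 0.3, 0.1])

# Linear mixing model: the observed pixel is a convex combination
# of endmember spectra.
y = E @ a

# A nonlinear model, as targeted by the paper, would instead learn a
# data-driven mapping y = f(E, a) -- e.g. via an autoencoder's decoder --
# rather than fixing f to be this matrix-vector product.
print(y.shape)  # (50,)
```

In an autoencoder formulation, the encoder maps `y` to the abundance estimate `a` (with ANC/ASC enforced, e.g. by a softmax layer), and the decoder plays the role of the mixing model; the paper's model-free nonlinear decoder learns that mapping entirely from data instead of assuming the linear form above.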

Keywords