Remote Sensing (Jan 2023)

Shallow-to-Deep Spatial–Spectral Feature Enhancement for Hyperspectral Image Classification

  • Lijian Zhou,
  • Xiaoyu Ma,
  • Xiliang Wang,
  • Siyuan Hao,
  • Yuanxin Ye,
  • Kun Zhao

DOI
https://doi.org/10.3390/rs15010261
Journal volume & issue
Vol. 15, no. 1
p. 261

Abstract


Since Hyperspectral Images (HSIs) contain rich ground object information, they are widely used for fine-grained classification of ground objects. However, some ground objects are spectrally similar, and the number of spectral bands is far greater than the number of ground object categories. It is therefore difficult to fully exploit spatial–spectral joint features with strong discrimination. To mine the spatial–spectral features of HSIs, a Shallow-to-Deep Feature Enhancement (SDFE) model with three modules based on Convolutional Neural Networks (CNNs) and a Vision Transformer (ViT) is proposed. First, the bands containing important spectral information are selected using Principal Component Analysis (PCA). Second, a two-layer 3D-CNN-based Shallow Spatial–Spectral Feature Extraction (SSSFE) module is constructed to preserve the spatial and spectral correlations across space and bands simultaneously. Third, to enhance the nonlinear representation ability of the network and avoid the loss of spectral information, a 2D-CNN-based channel attention residual module is designed to capture deeper, complementary spatial–spectral information. Finally, a ViT-based module is used to extract more robust joint spatial–spectral features (SSFs). Experiments were carried out on the Indian Pines (IP), Pavia University (PU) and Salinas (SA) datasets. The results show that the proposed feature enhancement method achieves better classification results than the compared methods.
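To make the described pipeline concrete, the following is a minimal PyTorch sketch of the stages named in the abstract: a PCA-reduced HSI patch fed to a two-layer 3D-CNN (SSSFE), a 2D-CNN channel-attention residual block, and a ViT-style transformer encoder followed by a classifier. All layer widths, kernel sizes, the number of retained PCA components, and the patch size are illustrative assumptions, not the authors' reported configuration; the PCA step itself is assumed to have been applied before the input reaches the model.

```python
# Sketch of the SDFE pipeline (assumed hyperparameters throughout):
# PCA-reduced patch -> two-layer 3D-CNN (SSSFE) -> 2D-CNN channel-attention
# residual block -> ViT-style encoder -> classifier head.
import torch
import torch.nn as nn


class ChannelAttentionResidual(nn.Module):
    """2D-CNN residual block with squeeze-and-excitation style channel attention."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        out = self.conv(x)
        out = out * self.attn(out)          # re-weight channels
        return torch.relu(out + x)          # residual connection


class SDFESketch(nn.Module):
    def __init__(self, pca_bands: int = 30, n_classes: int = 16):
        super().__init__()
        # Shallow Spatial-Spectral Feature Extraction: two 3D conv layers.
        self.sssfe = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(inplace=True),
        )
        # Fold the spectral depth into channels, then project for the 2D stage.
        self.reduce = nn.Conv2d(16 * pca_bands, 64, kernel_size=1)
        self.car = ChannelAttentionResidual(64)
        # ViT-style encoder over the spatial positions of the feature map.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=64, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.vit = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.cls = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (B, 1, pca_bands, H, W)
        x = self.sssfe(x)                      # (B, 16, pca_bands, H, W)
        b, c, d, h, w = x.shape
        x = x.reshape(b, c * d, h, w)          # spectral depth -> channels
        x = self.reduce(x)                     # (B, 64, H, W)
        x = self.car(x)                        # deeper spatial-spectral features
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, 64) token sequence
        tokens = self.vit(tokens)
        return self.cls(tokens.mean(dim=1))    # average-pool tokens, classify


if __name__ == "__main__":
    model = SDFESketch(pca_bands=30, n_classes=16)
    dummy = torch.randn(2, 1, 30, 13, 13)      # two 13x13 PCA-reduced patches
    print(model(dummy).shape)                  # torch.Size([2, 16])
```

In this sketch the spectral dimension produced by the 3D convolutions is folded into the channel axis before the 2D channel-attention stage, which is one common way to hand 3D-CNN features to 2D layers; the paper's exact fusion strategy may differ.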

Keywords