IET Computer Vision (Dec 2021)

CNN‐combined graph residual network with multilevel feature fusion for hyperspectral image classification

  • Wenhui Guo,
  • Guixun Xu,
  • Weifeng Liu,
  • Baodi Liu,
  • Yanjiang Wang

DOI
https://doi.org/10.1049/cvi2.12073
Journal volume & issue
Vol. 15, No. 8
pp. 592–607

Abstract

The application of graph convolutional networks (GCNs) to hyperspectral image (HSI) classification has become a promising approach, thanks to their flexible convolution operation over arbitrary irregular image regions. For HSI classification, a GCN can extract superpixel-level features with a topological structure, in contrast to traditional convolutional neural networks (CNNs), which use fixed square kernels to distil pixel-level features. To fully leverage these different levels of features, this study proposes a novel deep network, the CNN-combined graph residual network (C2GRN), which integrates a multilevel graph residual module and a spectral-spatial feature continuous-learning module. In the former module, HSI pixels are grouped into superpixels that serve as input nodes, which reduces the computational complexity and captures the multilevel spatial relevance between adjacent superpixels while extracting topology information. In the latter module, spectral-spatial features are learned continuously to obtain finer pixel-level features. Finally, the captured spectral-spatial features of the different levels are concatenated. This strategy not only adequately exploits the correlation and difference between adjacent spatial regions but also obtains finer and more valuable spectral-spatial information, which yields a significant boost in HSI classification. Experimental results on three benchmark HSI datasets demonstrate the superiority and effectiveness of C2GRN compared with state-of-the-art HSI classification methods.
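
To make the two-branch design described above more concrete, here is a minimal PyTorch-style sketch of the idea: a residual graph-convolution branch over superpixel nodes, a small CNN branch over pixels, and fusion of the multilevel features by concatenation before classification. The class names, layer sizes, and the superpixel adjacency and pixel-to-superpixel assignment inputs are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the C2GRN idea (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphResidualBlock(nn.Module):
    """One graph convolution over superpixel nodes with a residual (skip) connection."""

    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Linear(dim, dim, bias=False)

    def forward(self, x, adj):
        # x: (S, dim) node features; adj: (S, S) normalized superpixel adjacency.
        return F.relu(adj @ self.weight(x)) + x


class C2GRNSketch(nn.Module):
    """Superpixel-level graph residual branch + pixel-level CNN branch, fused by concatenation."""

    def __init__(self, in_bands, num_classes, dim=64, gcn_layers=3):
        super().__init__()
        # Graph branch: project superpixel spectra, then stack residual graph blocks.
        self.node_proj = nn.Linear(in_bands, dim)
        self.gcn_blocks = nn.ModuleList(
            [GraphResidualBlock(dim) for _ in range(gcn_layers)]
        )
        # CNN branch: small 2-D conv stack for pixel-level spectral-spatial features.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_bands, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(),
        )
        # Classifier over the concatenated multilevel features.
        self.classifier = nn.Linear(dim * (gcn_layers + 1), num_classes)

    def forward(self, image, adj, assign):
        # image: (1, bands, H, W); adj: (S, S) superpixel adjacency;
        # assign: (H*W, S) row-normalized pixel-to-superpixel assignment.
        _, c, h, w = image.shape
        pixels = image.view(c, h * w).t()               # (H*W, bands)
        nodes = self.node_proj(assign.t() @ pixels)     # mean spectrum per superpixel
        level_feats = []
        for block in self.gcn_blocks:
            nodes = block(nodes, adj)
            level_feats.append(assign @ nodes)          # project each level back to pixels
        cnn_feat = self.cnn(image).view(-1, h * w).t()  # (H*W, dim) pixel-level features
        fused = torch.cat(level_feats + [cnn_feat], dim=1)
        return self.classifier(fused)                   # (H*W, num_classes) logits
```

In this sketch, keeping every graph level (rather than only the last one) in `level_feats` is one way to realize the "multilevel feature fusion" named in the title; how the paper actually aggregates levels may differ.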

Keywords