IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)

Hyperspectral Target Detection-Based 2-D–3-D Parallel Convolutional Neural Networks for Hyperspectral Image Classification

  • Shih-Yu Chen,
  • Kai-Hsun Hsu,
  • Tzu-Hsien Kuo

DOI
https://doi.org/10.1109/JSTARS.2024.3394704
Journal volume & issue
Vol. 17
pp. 9451 – 9469

Abstract

This article presents a novel hyperspectral target detection (HTD)-based two-dimensional (2-D)–three-dimensional (3-D) parallel convolutional neural network (HTD-2D-3D-PCNN) model, which integrates the HTD technique to achieve outstanding performance in hyperspectral image classification. The proposed model effectively leverages both the spectral and spatial information in hyperspectral imaging through a dual-branch architecture. In the first branch, HTD is used to enhance the spectral features of targets of interest while suppressing the background; the enhanced image is then fed to a 2D-CNN augmented with an additional deconvolution layer to highlight spatial characteristics. Concurrently, the second branch applies dimensionality reduction via principal component analysis and employs a 3D-CNN to capture both spectral and spatial attributes. The feature maps from both convolutional neural networks (CNNs) are then concatenated and processed through fully connected layers for classification. To validate the effectiveness of the proposed HTD-2D-3D-PCNN, extensive experiments are conducted on five widely used public hyperspectral datasets (Indian Pines, Pavia University, Salinas Scene, Kennedy Space Center, and Botswana) with a consistent training-sample ratio of either 10% or 5%. The results show that HTD-2D-3D-PCNN achieves overall accuracies of 98.41%, 99.85%, 99.92%, 99.82%, and 98.82% on the respective datasets, surpassing the performance of recent methodologies.
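The PCA step feeding the 3-D-CNN branch reduces the spectral dimension of the hyperspectral cube before convolution. A minimal NumPy sketch of that preprocessing, assuming a cube of shape (height, width, bands) and a hypothetical choice of 30 retained components (the article's actual component count and implementation are not given in the abstract):

```python
import numpy as np

def pca_reduce(cube, k):
    """Project the spectral axis of an (H, W, B) hyperspectral cube onto its
    top-k principal components, yielding an (H, W, k) cube.

    This is a generic PCA sketch, not the authors' exact preprocessing.
    """
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)
    X -= X.mean(axis=0)                        # center each spectral band
    # Eigendecomposition of the B x B spectral covariance matrix
    cov = (X.T @ X) / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]      # indices of top-k components
    return (X @ eigvecs[:, order]).reshape(H, W, k)

# Example: a random 10x10 scene with 200 bands reduced to 30 components
cube = np.random.rand(10, 10, 200)
reduced = pca_reduce(cube, 30)
print(reduced.shape)  # (10, 10, 30)
```

The reduced cube would then be cut into spatial patches and passed to the 3-D-CNN branch, while the first branch operates on the HTD-enhanced image at full spatial resolution.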

Keywords