IEEE Access (Jan 2023)

Remote Sensing Image Scene Classification Based on Head-Tail Global Joint Dual Attention Discrimination Network

  • Lin Wei,
  • Chao Geng,
  • Yuping Yin

DOI
https://doi.org/10.1109/ACCESS.2023.3306083
Journal volume & issue
Vol. 11
pp. 88305 – 88316

Abstract

High-resolution, multi-channel remote sensing images have complex spatial layouts and channel structures. Traditional Convolutional Neural Network (CNN) methods do not make full use of the features of remote sensing images: they ignore contextual relevance and global structure, which leads to insufficient feature discrimination and limits scene classification accuracy. To address these issues, we propose HCA-TSA, a discriminative network based on a head-tail global joint dual attention mechanism. Considering the distinct characteristics of the spatial and channel dimensions, we introduce a channel attention module, Head-Channel Attention (HCA), and a spatial attention module, Tail-Spatial Attention (TSA). The network uses the contextual discrimination ability of the Gated Recurrent Unit (GRU) module and the structure of the global layout to learn importance weights for different channel and spatial features, attending more to salient features in the remote sensing image while suppressing insignificant ones, thereby improving the discriminative power of the feature representation. The proposed modules can be attached to any benchmark CNN, and the whole network can be trained end-to-end. In comprehensive comparative experiments on three markedly different public datasets, AID, UC-Merced and HSRS-SC, the highest classification accuracies are 97.68%, 99.41% and 98.38%, respectively. These results show that the proposed method effectively discriminates scenes and achieves competitive classification performance.
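
The abstract does not give implementation details, but the following is a minimal sketch of how a GRU-driven channel attention block in the spirit of HCA could be wired into a CNN feature map in PyTorch. The module name, hidden size, pooling choice and sigmoid gating are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class GRUChannelAttention(nn.Module):
    """Hypothetical sketch of GRU-based channel attention
    (inspired by the HCA idea; not the authors' implementation)."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        # Treat the C per-channel descriptors as a length-C sequence so
        # the GRU can model contextual relations between channels.
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Global average pooling: one descriptor per channel -> (B, C, 1)
        desc = x.mean(dim=(2, 3)).unsqueeze(-1)
        # GRU scans the channel sequence and outputs context-aware states
        states, _ = self.gru(desc)                # (B, C, hidden)
        weights = torch.sigmoid(self.fc(states))  # (B, C, 1) importance weights
        # Re-weight each channel of the input feature map
        return x * weights.unsqueeze(-1)


if __name__ == "__main__":
    feats = torch.randn(2, 256, 14, 14)  # e.g. a backbone CNN feature map
    out = GRUChannelAttention()(feats)
    print(out.shape)                     # torch.Size([2, 256, 14, 14])
```

Because the block only re-weights channels and preserves the feature-map shape, it can be dropped after any backbone stage and trained end-to-end with the rest of the network, which matches the plug-in design described in the abstract; a spatial (TSA-style) counterpart would analogously produce an H×W weight map instead.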

Keywords