IET Computer Vision (Oct 2015)

Auto‐encoder‐based shared mid‐level visual dictionary learning for scene classification using very high resolution remote sensing images

  • Gong Cheng,
  • Peicheng Zhou,
  • Junwei Han,
  • Lei Guo,
  • Jungong Han

DOI
https://doi.org/10.1049/iet-cvi.2014.0270
Journal volume & issue
Vol. 9, No. 5
pp. 639–647

Abstract

Effective representation and classification of scenes in very high resolution (VHR) remote sensing images covers a wide range of applications. Although robust low-level image features have proven effective for scene classification, they are not semantically meaningful and thus struggle with challenging visual recognition tasks. In this study, the authors propose a new and effective auto-encoder-based method to learn a shared mid-level visual dictionary. This dictionary serves as a shared and universal basis for discovering mid-level visual elements. On the one hand, the mid-level visual dictionary, learnt using machine learning techniques, is more discriminative and contains richer semantic information than traditional low-level visual words. On the other hand, it is more robust to occlusions and image clutter. In the authors' scene-classification scheme, images are represented by discriminative mid-level visual elements rather than by individual pixels or low-level image features. This new image representation captures much of the high-level meaning and content of an image, facilitating challenging remote sensing scene-classification tasks. Comprehensive evaluations on a challenging VHR remote sensing image data set, together with comparisons against state-of-the-art approaches, demonstrate the effectiveness and superiority of the proposed method.
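To make the core idea concrete, the sketch below illustrates the general technique the abstract names: training an auto-encoder on low-level descriptors so that its hidden units form a dictionary of mid-level elements, with images then represented by their hidden-layer codes. This is a minimal NumPy illustration under assumed settings (random toy descriptors, 64-D inputs, a 32-atom dictionary, plain gradient descent), not the authors' actual implementation or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for low-level descriptors (e.g. dense local features);
# the paper works with features from VHR remote sensing images.
X = rng.standard_normal((500, 64))           # 500 descriptors, 64-D

n_hidden = 32                                 # dictionary size (assumed)
W1 = rng.standard_normal((64, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 64)) * 0.1
b2 = np.zeros(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
losses = []
for _ in range(200):
    H = sigmoid(X @ W1 + b1)                  # mid-level codes (encoder)
    R = H @ W2 + b2                           # linear reconstruction (decoder)
    err = R - X
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared reconstruction error
    dR = 2.0 * err / X.size
    dW2 = H.T @ dR
    db2 = dR.sum(axis=0)
    dH = dR @ W2.T
    dZ = dH * H * (1.0 - H)                   # sigmoid derivative
    dW1 = X.T @ dZ
    db1 = dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Each hidden unit's incoming weights act as one mid-level dictionary
# atom; an image would be represented by pooling the codes of its
# descriptors rather than by the raw low-level features themselves.
dictionary = W1.T                             # (n_hidden, 64)
codes = sigmoid(X @ W1 + b1)                  # (500, n_hidden)
```

In this reading, `dictionary` plays the role of the shared mid-level basis and `codes` the mid-level representation; the paper's pipeline additionally makes the dictionary discriminative and shared across scene classes, which this unsupervised sketch does not attempt.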

Keywords