IET Image Processing (Feb 2023)

GSGAN: Learning controllable geospatial images generation

  • Xingzhe Su,
  • Yijun Lin,
  • Quan Zheng,
  • Fengge Wu,
  • Changwen Zheng,
  • Junsuo Zhao

DOI
https://doi.org/10.1049/ipr2.12641
Journal volume & issue
Vol. 17, no. 2
pp. 401–417

Abstract


Compared with natural images, geospatial images cover larger areas and have more complex content. Few algorithms exist for generating controllable geospatial images, and their results are of low quality. In response to this problem, this paper proposes the Geospatial Style Generative Adversarial Network (GSGAN) to generate controllable, high‐quality geospatial images. Current conditional generators suffer from mode collapse in the geospatial domain; this problem is addressed via a mode seeking regularization term modified with contrastive learning. In addition, the discriminator architecture is modified to process both the global feature information and the texture information of geospatial images, and a feature loss is introduced in the generator to stabilize training and improve generated image quality. Comprehensive experiments are conducted on the UC Merced Land Use Dataset, the NWPU‐RESISC45 Dataset, and the AID Dataset to evaluate all compared methods. Experimental results show that our method outperforms state‐of‐the‐art models: it not only generates high‐quality and controllable geospatial images, but also enables the discriminator to learn better representations.
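The mode seeking regularization the abstract builds on is commonly formulated (following Mao et al., MSGAN) as a penalty on the ratio of latent-code distance to generated-image distance: when two distinct codes collapse to near-identical images, the ratio blows up. A minimal numpy sketch of that base term is below; the paper's actual term additionally incorporates contrastive learning, whose exact formulation is not given in this abstract, so the function here is an illustrative assumption rather than the authors' method.

```python
import numpy as np

def mode_seeking_loss(img1, img2, z1, z2, eps=1e-8):
    """Base mode seeking regularization (MSGAN-style, illustrative only).

    Penalizes the generator when two distinct latent codes (z1, z2)
    produce nearly identical images (img1, img2): the loss grows as
    the image distance shrinks relative to the latent distance.
    """
    d_img = np.mean(np.abs(img1 - img2))  # L1 distance between generated images
    d_z = np.mean(np.abs(z1 - z2))        # L1 distance between latent codes
    return d_z / (d_img + eps)            # large when outputs collapse

# Collapsed outputs (identical images) yield a much larger penalty
# than diverse outputs, pushing the generator toward mode coverage.
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=8), rng.normal(size=8)
img = rng.normal(size=(4, 4))
collapsed = mode_seeking_loss(img, img, z1, z2)        # d_img ~ 0 -> huge
diverse = mode_seeking_loss(img, img + 1.0, z1, z2)    # d_img = 1 -> small
```

In training, this term would be added (with a weight) to the generator's adversarial loss; the eps guard prevents division by zero when the two images are exactly equal.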