ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Dec 2023)

REVERSE DOMAIN ADAPTATION FOR INDOOR CAMERA POSE REGRESSION

  • D. Acharya,
  • K. Khoshelham

DOI
https://doi.org/10.5194/isprs-annals-X-1-W1-2023-453-2023
Journal volume & issue
Vol. X-1-W1-2023
pp. 453 – 460

Abstract


Synthetic images have been used to mitigate the scarcity of annotated data for training deep learning approaches, followed by domain adaptation to reduce the gap between synthetic and real images. One such approach uses Generative Adversarial Networks (GANs), such as CycleGAN, to bridge the domain gap: synthetic images are translated into real-looking synthetic images, which are then used to train the deep learning models. In this article, we explore the less intuitive alternative strategy of domain adaptation in the reverse direction, i.e., real-to-synthetic adaptation. We train the deep learning models directly on synthetic data, and during inference we apply domain adaptation with CycleGAN to convert real images into synthetic-looking real images. This strategy reduces the amount of data conversion required during training, can potentially generate images with fewer artefacts than the harder synthetic-to-real case, and can improve the performance of deep learning models. We demonstrate the success of this strategy for indoor localisation through experiments with camera pose regression. The experimental results show an improvement in localisation accuracy with the proposed domain adaptation compared to synthetic-to-real adaptation.
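
As a rough illustration of the proposed inference pipeline, the sketch below pairs a real-to-synthetic CycleGAN generator with a pose regressor trained only on synthetic images. It is a minimal sketch assuming a PyTorch setup: the network definitions are hypothetical placeholders, not the authors' architectures; only the order of operations, translating the real query image into the synthetic domain before regressing its pose, follows the strategy described above.

```python
# Minimal sketch of inference-time reverse domain adaptation (assumed PyTorch setup).
# Both modules below are hypothetical placeholders for the trained networks.
import torch
import torch.nn as nn


class RealToSyntheticGenerator(nn.Module):
    """Placeholder standing in for the CycleGAN generator G: real -> synthetic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class PoseRegressor(nn.Module):
    """Placeholder pose regression CNN trained on synthetic images only.
    Outputs 7 values: a 3-D translation and a 4-D rotation quaternion."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 7)

    def forward(self, x):
        return self.head(self.backbone(x))


@torch.no_grad()
def localise(real_image, generator, regressor):
    """Translate a real query image into the synthetic domain, then regress its pose."""
    synthetic_looking = generator(real_image)   # real -> synthetic-looking image
    pose = regressor(synthetic_looking)         # pose from the synthetic-domain model
    t, q = pose[:, :3], pose[:, 3:]
    q = q / q.norm(dim=1, keepdim=True)         # normalise the quaternion
    return t, q


if __name__ == "__main__":
    generator = RealToSyntheticGenerator().eval()  # would be loaded from a trained CycleGAN
    regressor = PoseRegressor().eval()             # trained on synthetic images only
    query = torch.rand(1, 3, 256, 256)             # stand-in for a real query photograph
    translation, quaternion = localise(query, generator, regressor)
    print(translation.shape, quaternion.shape)     # torch.Size([1, 3]) torch.Size([1, 4])
```

Note that, under this strategy, the GAN translation happens only at inference time, so the pose regressor itself never needs domain-adapted training data.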