IEEE Access (Jan 2023)

TP-GAN: Simple Adversarial Network With Additional Player for Dense Depth Image Estimation

  • Andi Hendra,
  • Yasushi Kanazawa

DOI
https://doi.org/10.1109/ACCESS.2023.3272292
Journal volume & issue
Vol. 11
pp. 44176 – 44191

Abstract

We present a simple yet robust monocular depth estimation technique that synthesizes a depth map from a single RGB input image by leveraging generative adversarial networks (GANs). We employ an additional sub-model, termed the refiner, to extract local depth features and combine them with the global scene information from the generator, improving performance over the standard GAN architecture. Notably, the generator is the first player, learning to synthesize depth images. The second player, the discriminator, classifies the generated depth. Meanwhile, the third player, the refiner, enhances the final reconstructed depth. Complementing the GAN model, we apply a conditional generative adversarial network (cGAN) to guide the generator in mapping the input image to its corresponding depth representation. We further incorporate a structural similarity (SSIM) term into the loss functions of the generator and refiner during GAN training. Through extensive experimental validation, we confirm the performance of our strategy on the public indoor NYU Depth v2 dataset and the outdoor KITTI dataset. Experimental results on NYU Depth v2 show that our approach achieves the best performance, 96.0% threshold accuracy ( $\delta < 1.25^{2}$ ), and the second-best accuracy at all thresholds on KITTI. We find that our method compares favorably with numerous existing monocular depth estimation strategies and yields a considerable improvement in depth estimation accuracy despite its simple network architecture.
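The abstract names two standard quantities: the structural similarity (SSIM) used in the generator/refiner losses, and the threshold accuracy $\delta$ used as the evaluation metric. As a hedged illustration only (not the authors' code), the sketch below computes both in NumPy; it uses a simplified global SSIM over the whole image rather than the usual windowed, patch-based formulation.

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified global SSIM between two images scaled to [0, 1].

    Standard per-window SSIM (as in the original SSIM paper) averages this
    quantity over local patches; this global version is a simplification.
    """
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )

def threshold_accuracy(pred, gt, thresh=1.25**2):
    """Fraction of pixels whose depth ratio max(pred/gt, gt/pred) < thresh.

    thresh=1.25**2 corresponds to the delta < 1.25^2 accuracy reported
    in the abstract; depths are assumed strictly positive.
    """
    ratio = np.maximum(pred / gt, gt / pred)
    return float((ratio < thresh).mean())

# A perfect prediction scores SSIM = 1 and 100% threshold accuracy.
depth = np.linspace(0.1, 1.0, 16).reshape(4, 4)
print(ssim(depth, depth))                    # 1.0 (up to float rounding)
print(threshold_accuracy(depth, depth))      # 1.0
```

A training loss would then use `1 - ssim(pred, gt)` so that maximizing similarity minimizes the loss, typically combined with the adversarial term from the discriminator.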

Keywords