IEEE Access (Jan 2019)
Robust Image Translation and Completion Based on Dual Auto-Encoder With Bidirectional Latent Space Regression
Abstract
Automated image translation and completion are subjects of keen interest due to their impact on image representation, interpretation, and enhancement. To date, a conditional or dual adversarial framework with a convolutional auto-encoder embedded as a generator is known to offer the best accuracy in image translation. However, although its accuracy is excellent, the adversarial framework may suffer from a lack of generality, i.e., the accuracy drops when translating incomplete or corrupted data given as untrained noisy input. This paper proposes an approach to robust image-to-image translation that offers a high level of generality while also keeping accuracy high. The proposed approach is referred to here as a dual auto-encoder with bidirectional latent space regression, or bidirectionally associative dual auto-encoder (BA-DualAE) for short. The proposed BA-DualAE is configured with two auto-encoders whose individual latent spaces are tightly associated by a bidirectional regression network. The two auto-encoders are first trained independently for their respective domains, and then the bidirectional regression network is trained to learn the general association between data pairs. With its capability of robust and bidirectional image translation, BA-DualAE performs direct image completion with no iterative search. Experiments with photo-sketch datasets demonstrated that the proposed BA-DualAE is highly robust under incomplete or corrupted data conditions and is far superior to adversarial frameworks in terms of generality and robustness.
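The abstract outlines a two-stage scheme: two domain-specific auto-encoders trained independently, followed by a bidirectional regression network linking their latent spaces. The following PyTorch sketch illustrates that idea under illustrative assumptions; the layer sizes, module names, losses, and image resolution (single-channel 64x64 inputs) are not taken from the paper and are only meant to convey the structure.

```python
# A minimal sketch of the two-stage idea described above. All layer sizes,
# module names, and losses are illustrative assumptions, not the authors'
# exact configuration.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Convolutional auto-encoder for one image domain (e.g., photo or sketch)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class BidirectionalRegressor(nn.Module):
    """Associates the two latent spaces in both directions (A->B and B->A)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.a_to_b = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                                    nn.Linear(latent_dim, latent_dim))
        self.b_to_a = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                                    nn.Linear(latent_dim, latent_dim))

def train(ae_a, ae_b, reg, paired_loader, epochs=10):
    mse = nn.MSELoss()
    # Stage 1: train each auto-encoder independently on its own domain
    # with a reconstruction loss only.
    opt_ae = torch.optim.Adam(list(ae_a.parameters()) + list(ae_b.parameters()), lr=1e-3)
    for _ in range(epochs):
        for photo, sketch in paired_loader:
            rec_a, _ = ae_a(photo)
            rec_b, _ = ae_b(sketch)
            loss = mse(rec_a, photo) + mse(rec_b, sketch)
            opt_ae.zero_grad(); loss.backward(); opt_ae.step()

    # Stage 2: freeze both auto-encoders and train the bidirectional
    # regressor to associate paired latent codes.
    for p in list(ae_a.parameters()) + list(ae_b.parameters()):
        p.requires_grad_(False)
    opt_reg = torch.optim.Adam(reg.parameters(), lr=1e-3)
    for _ in range(epochs):
        for photo, sketch in paired_loader:
            z_a = ae_a.encoder(photo)
            z_b = ae_b.encoder(sketch)
            loss = mse(reg.a_to_b(z_a), z_b) + mse(reg.b_to_a(z_b), z_a)
            opt_reg.zero_grad(); loss.backward(); opt_reg.step()

def photo_to_sketch(ae_a, ae_b, reg, photo):
    """Translate at test time: encode in domain A, map the latent code, decode in domain B."""
    with torch.no_grad():
        return ae_b.decoder(reg.a_to_b(ae_a.encoder(photo)))
```

Because the regressor maps latents in both directions, the same trained components support sketch-to-photo translation by composing `ae_b.encoder`, `reg.b_to_a`, and `ae_a.decoder` in the reverse order.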
Keywords