IEEE Access (Jan 2019)

Feature Encoder Guided Generative Adversarial Network for Face Photo-Sketch Synthesis

  • Jieying Zheng,
  • Wanru Song,
  • Yahong Wu,
  • Ran Xu,
  • Feng Liu

DOI
https://doi.org/10.1109/ACCESS.2019.2949070
Journal volume & issue
Vol. 7
pp. 154971 – 154985

Abstract

Face photo-sketch synthesis often suffers from problems such as low clarity, facial distortion, content loss, missing texture, and color inconsistency in the synthesized images. To alleviate these problems, we propose a feature Encoder Guided Generative Adversarial Network (EGGAN) for face photo-sketch synthesis. We adopt a cycle-consistent generative adversarial network with skip connections as the general framework, which trains the models for sketch synthesis and photo synthesis simultaneously, so that the two generators constrain each other. In addition, a feature auto-encoder is introduced to refine the synthetic results. The feature encoder is trained to explore a latent space between the photo and sketch domains, under the assumption that a uniform feature representation exists for each photo-sketch pair. Instead of participating in the generation process, the feature encoder is utilized only to guide the training process. Meanwhile, the feature loss and the feature consistency loss between the fake and real images, computed in the latent space, prevent the loss of important identity-specific information and reduce artifacts in the synthesized images. Extensive experiments demonstrate that our method achieves state-of-the-art performance on public databases in terms of both perceptual quality and quantitative assessments.
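To make the training objective concrete, the sketch below illustrates how the cycle-consistency loss, the encoder-based feature loss, and the feature consistency loss described in the abstract could be combined. All names, dimensions, and the linear stand-ins for the generators and the feature encoder are assumptions for illustration only; the paper's actual networks, loss weights, and formulations differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's networks): linear maps
# acting on flattened "images" of dimension d, with a k-dim latent space.
d, k = 16, 8
W_G = rng.standard_normal((d, d)) * 0.1   # generator G: photo -> sketch
W_F = rng.standard_normal((d, d)) * 0.1   # generator F: sketch -> photo
W_E = rng.standard_normal((k, d)) * 0.1   # feature encoder E (shared latent space)

def G(x):  # photo-to-sketch generator
    return W_G @ x

def F(y):  # sketch-to-photo generator
    return W_F @ y

def E(z):  # feature encoder: maps either domain into the latent space
    return W_E @ z

def l1(a, b):
    return np.abs(a - b).mean()

photo = rng.standard_normal(d)
sketch = rng.standard_normal(d)

fake_sketch = G(photo)
fake_photo = F(sketch)

# Cycle-consistency: translating there and back should recover the input,
# so the two generators constrain each other.
loss_cycle = l1(F(fake_sketch), photo) + l1(G(fake_photo), sketch)

# Feature loss: fake and real images of the same domain should have
# matching encoder features. E only guides training; it is not part of
# the generation path.
loss_feat = l1(E(fake_sketch), E(sketch)) + l1(E(fake_photo), E(photo))

# Feature consistency: a photo-sketch pair is assumed to share one
# latent representation.
loss_feat_consist = l1(E(photo), E(sketch))

# Loss weights are omitted here; the full objective would also include
# the adversarial terms from the discriminators.
total = loss_cycle + loss_feat + loss_feat_consist
print(float(total))
```

In a full training loop, `total` (plus the adversarial losses) would be minimized over the generator and encoder parameters while the discriminators are trained adversarially; here the point is only the shape of the three guidance terms.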

Keywords