IEEE Access (Jan 2019)

Adversarially Regularized U-Net-based GANs for Facial Attribute Modification and Generation

  • Jiayuan Zhang,
  • Ao Li,
  • Yu Liu,
  • Minghui Wang

DOI
https://doi.org/10.1109/ACCESS.2019.2926633
Journal volume & issue
Vol. 7
pp. 86453–86462

Abstract


Modifying and generating facial images with desired attributes are two important and closely related tasks in computer vision. Some current methods exploit this relationship and handle both tasks simultaneously with a unified model. However, producing images of high visual quality on both tasks remains a challenge. To tackle this issue, we propose a novel model called adversarially regularized U-net (ARU-net)-based generative adversarial networks (ARU-GANs). The ARU-net, the major component of the ARU-GAN, is inspired by the design principle of the U-net: it uses skip connections to pass different-level features from the encoder to the decoder, preserving sufficient attribute-independent detail for the modification task. In addition, this U-net-like architecture employs an adversarial regularization term that guides the distribution of the latent representation to match a prior distribution, ensuring that meaningful faces can be generated by sampling from this prior. We also propose a joint training technique for the ARU-GAN that enables the facial attribute modification and generation tasks to be learned together during training. We perform experiments on the CelebFaces Attributes (CelebA) dataset, with visual analysis and quantitative evaluation of both tasks, demonstrating that our model can successfully produce facial images of high visual quality. The results also show that learning the two tasks jointly improves performance compared with learning them individually. Finally, we further validate the effectiveness of our method with an ablation study and experiments on another dataset.
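The abstract combines two mechanisms: a U-net-style encoder/decoder whose skip connections carry attribute-independent detail past the latent bottleneck, and an adversarial regularizer (a discriminator on latent codes, as in adversarial autoencoders) that pulls the code distribution toward a prior so that sampling the prior yields valid faces. The following PyTorch sketch is purely illustrative and is not the authors' implementation; the layer sizes, the 64x64 input resolution, and the Gaussian prior are all assumptions made for the example.

```python
# Hypothetical sketch of the two ideas in the abstract: skip connections
# from encoder to decoder, plus a latent discriminator for the adversarial
# regularization term. Not the paper's architecture; sizes are assumed.
import torch
import torch.nn as nn

class ARUNetSketch(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: two strided conv blocks, 3x64x64 -> 64x16x16.
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU())   # -> 32x32x32
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU())  # -> 64x16x16
        self.to_latent = nn.Linear(64 * 16 * 16, latent_dim)
        self.from_latent = nn.Linear(latent_dim, 64 * 16 * 16)
        # Decoder: each stage is concatenated with the matching encoder
        # features -- these concatenations are the U-net skip connections.
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(64 + 64, 32, 4, 2, 1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(32 + 32, 3, 4, 2, 1)

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        z = self.to_latent(f2.flatten(1))                # latent code to regularize
        h = self.from_latent(z).view(-1, 64, 16, 16)
        h = self.dec2(torch.cat([h, f2], dim=1))         # skip connection
        out = torch.tanh(self.dec1(torch.cat([h, f1], dim=1)))
        return out, z

# Latent discriminator for the adversarial regularization term: it is
# trained to separate encoded codes z = E(x) from samples of the prior,
# while the encoder is trained to fool it (adversarial-autoencoder style).
latent_disc = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid()
)

model = ARUNetSketch()
x = torch.randn(4, 3, 64, 64)        # dummy batch of 64x64 "face" images
recon, z = model(x)
reg_loss = nn.functional.binary_cross_entropy(
    latent_disc(z), torch.ones(z.size(0), 1)  # encoder tries to look like the prior
)
print(recon.shape, reg_loss.item())
```

One design question this sketch leaves open, and which any model of this kind must address, is how to supply the skip-connection features when generating purely from a prior sample (where no encoder pass exists); the paper's joint training of the modification and generation tasks is aimed at making both pathways produce high-quality faces.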

Keywords