IEEE Access (Jan 2018)

Recognizing Facial Sketches by Generating Photorealistic Faces Guided by Descriptive Attributes

  • Xinzhe Li,
  • Xiao Yang,
  • Hang Su,
  • Qin Zhou,
  • Shibao Zheng

DOI
https://doi.org/10.1109/ACCESS.2018.2883463
Journal volume & issue
Vol. 6
pp. 77568–77580

Abstract


Recognizing or retrieving a face based on its sketch-photo similarity has important applications in law enforcement and public security. While many existing methods focus on recognizing a facial sketch from image-based queries, this setting has major limitations in practice, since an abstract sketch captures only the sparse global structure of a human face. To address this issue, in this paper we propose to bridge the gap between the sketch-photo pair by “translating” the abstract visual sketch into a photorealistic face with the help of descriptive attributes. Specifically, we propose an improved multi-modal conditional generative adversarial network (MMC-GAN) that jointly exploits the complementary information of visual sketches and semantic facial attributes to reduce the uncertainty of facial image generation. A fusion network is introduced to better leverage the information from the two modalities (visual sketch and semantic attributes). To improve the details of the generated facial images, we adopt a two-path generator structure in which the global and local features of human faces are learned in parallel. An identity-preserving constraint is further introduced to enhance the identity consistency between the sketches and facial images. Extensive experiments demonstrate that we can effectively manipulate face image generation by varying the input facial attributes. Moreover, the generated photorealistic face images are shown to improve sketch-photo face recognition and retrieval.
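The following is a minimal PyTorch sketch of the ideas named in the abstract: fusing a visual sketch with semantic attributes, a two-path (global/local) generator, and an identity-preserving loss. All module names, layer sizes, and the cosine-based identity loss below are illustrative assumptions for exposition, not the authors' exact MMC-GAN implementation.

```python
# Illustrative sketch only: fusion of sketch + attribute modalities,
# two-path generation, and an identity-preserving loss (all assumed details).
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionEncoder(nn.Module):
    """Encodes a 1-channel sketch and fuses it with an attribute vector."""
    def __init__(self, n_attrs: int, feat_dim: int = 256):
        super().__init__()
        self.sketch_enc = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(128 * 4 * 4, feat_dim),
        )
        self.attr_enc = nn.Linear(n_attrs, feat_dim)
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, sketch, attrs):
        s = self.sketch_enc(sketch)
        a = F.relu(self.attr_enc(attrs))
        return F.relu(self.fuse(torch.cat([s, a], dim=1)))


class TwoPathGenerator(nn.Module):
    """Global path models overall face structure; local path adds detail."""
    def __init__(self, feat_dim: int = 256, out_size: int = 64):
        super().__init__()
        self.out_size = out_size
        self.global_path = nn.Linear(feat_dim, 3 * out_size * out_size)
        self.local_path = nn.Linear(feat_dim, 3 * out_size * out_size)

    def forward(self, fused):
        b = fused.size(0)
        g = self.global_path(fused).view(b, 3, self.out_size, self.out_size)
        l = self.local_path(fused).view(b, 3, self.out_size, self.out_size)
        return torch.tanh(g + l)  # combine coarse structure with fine detail


def identity_preserving_loss(fake, real, id_net):
    """Pull identity features of the generated face toward the true photo's."""
    f_fake = F.normalize(id_net(fake), dim=1)
    f_real = F.normalize(id_net(real), dim=1)
    return (1.0 - (f_fake * f_real).sum(dim=1)).mean()  # cosine distance


if __name__ == "__main__":
    enc, gen = FusionEncoder(n_attrs=8), TwoPathGenerator()
    # Stand-in for a pretrained face-recognition feature extractor.
    id_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
    sketch = torch.randn(2, 1, 64, 64)
    attrs = torch.randint(0, 2, (2, 8)).float()
    photo = torch.randn(2, 3, 64, 64)
    fake = gen(enc(sketch, attrs))
    print(identity_preserving_loss(fake, photo, id_net).item())
```

In the paper's setting, the adversarial and identity-preserving terms would be combined during training so that generated faces stay both photorealistic and consistent with the sketched subject's identity; the stand-in identity network above would be replaced by a pretrained face-recognition model.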

Keywords