Applied Sciences (Jun 2024)

Synthetic Image Generation Using Conditional GAN-Provided Single-Sample Face Image

  • Muhammad Ali Iqbal,
  • Waqas Jadoon,
  • Soo Kyun Kim

DOI
https://doi.org/10.3390/app14125049
Journal volume & issue
Vol. 14, no. 12
p. 5049

Abstract

The performance of facial recognition systems decreases significantly when training images are scarce, and the problem is most severe when only one image per subject is available: the single sample per person (SSPP) problem. Probe images may also contain variations such as illumination, expression, and disguise, making them difficult to recognize accurately. In this work, we present a conditional GAN (CGAN) model that generates six highly realistic facial expressions from a single neutral face image. To evaluate our approach comprehensively, we employed several pre-trained models (VGG-Face, ResNet-50, FaceNet, and DeepFace) along with a custom CNN model. Initially, these models achieved only about 76% accuracy on single-sample neutral images, highlighting the SSPP challenge. After fine-tuning on the synthetic expressions generated by our CGAN from these single images, their accuracy increased to around 99%. This substantial improvement shows that our method effectively addresses the SSPP problem.
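The core CGAN idea the abstract describes, driving a generator with a face encoding plus a target expression condition, can be sketched minimally. The snippet below is illustrative only: the label set, layer sizes, and function names are assumptions, not the authors' architecture, and the random linear map stands in for a trained generator network. It shows the conditioning mechanism, i.e., concatenating a one-hot expression label (and noise) with the input encoding before mapping to an output image.

```python
import numpy as np

# Hypothetical set of six target expressions (the paper's exact labels may differ).
EXPRESSIONS = ["anger", "disgust", "fear", "happy", "sad", "surprise"]

def one_hot(label: str) -> np.ndarray:
    """Encode a target expression as a one-hot condition vector."""
    vec = np.zeros(len(EXPRESSIONS))
    vec[EXPRESSIONS.index(label)] = 1.0
    return vec

def conditional_generator(face_embedding: np.ndarray, label: str,
                          rng: np.random.Generator) -> np.ndarray:
    """Toy CGAN-style generator step: concatenate the face encoding,
    the expression condition, and latent noise, then apply an untrained
    linear map standing in for the learned generator network."""
    z = rng.standard_normal(16)                                  # latent noise
    cond_input = np.concatenate([face_embedding, one_hot(label), z])
    w = rng.standard_normal((64 * 64, cond_input.size)) * 0.01   # placeholder weights
    img = np.tanh(w @ cond_input)                                # pixels in [-1, 1]
    return img.reshape(64, 64)

rng = np.random.default_rng(0)
neutral = rng.standard_normal(128)          # stand-in for a neutral-face embedding
out = conditional_generator(neutral, "happy", rng)
print(out.shape)  # (64, 64)
```

In a real CGAN, the discriminator receives the same condition vector alongside the image, so the generator is penalized unless the synthesized face matches the requested expression.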

Keywords