IEEE Access (Jan 2024)

Conditional-GAN-Based Face Inpainting Approaches With Symmetry and View-Degree Utilization

  • Tzung-Pei Hong,
  • Jin-Hang Wu,
  • Ja-Hwung Su,
  • Tang-Kai Yin

DOI
https://doi.org/10.1109/ACCESS.2024.3417442
Journal volume & issue
Vol. 12
pp. 87467 – 87478

Abstract

Recently, image inpainting has been proposed as a solution for restoring corrupted images in the field of computer vision. Face inpainting is a subfield of image inpainting that refers to a set of image-editing algorithms for smoothly reconstructing the missing regions of a face. Face inpainting is more challenging than general image inpainting because it requires more facial structure information. Although a number of past studies addressed face inpainting by using face segmentation, face edges, and face topology, some important information, such as geometric and symmetric properties, was ignored. Based on these concepts, in this paper, we propose a two-stage face inpainting method called CGAN (Conditional Generative Adversarial Network), which integrates face landmarks with a Generative Adversarial Network (GAN). In the first stage, the face landmarks are predicted as the condition, providing the GAN with important geometric and symmetric information. The main idea in this stage is to dynamically adjust the loss by the proposed view degree. Accordingly, the masked face image and the corresponding face landmarks are used as conditional inputs to the GAN in the second stage. Finally, the missing regions are inpainted by the proposed CGAN. To demonstrate the effectiveness of the proposed method, a number of evaluations were conducted on real datasets. The experimental results show that the proposed method predicts better face landmarks by using information about geometric structures and symmetric appearance, and thereupon the proposed CGAN reconstructs the missing regions better than the compared methods.
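The two ideas in the abstract, a landmark loss that is dynamically adjusted by a view degree and a second-stage GAN conditioned on the masked image plus landmarks, can be sketched as follows. This is a minimal illustration only: the paper's exact view-degree definition and network architecture are not given in the abstract, so the weighting scheme, the function names, and the treatment of `view_degree` as a scalar in [0, 1] (0 = frontal, 1 = full profile) are all assumptions.

```python
import numpy as np

def landmark_loss(pred, target, view_degree, left_idx, right_idx):
    """Hypothetical view-degree-weighted L2 loss over 2-D landmarks.

    pred, target : (N, 2) arrays of landmark coordinates.
    view_degree  : assumed scalar in [0, 1]; larger = face turned further away.
    left_idx, right_idx : index arrays splitting landmarks into the two
    face halves (the split itself is an assumption for illustration).
    """
    per_point = np.sum((pred - target) ** 2, axis=1)  # squared error per landmark
    w = np.ones(len(pred))
    # Down-weight the half of the face that the view degree says is
    # turned away, so errors on the less-visible side count for less.
    w[left_idx] = 1.0
    w[right_idx] = 1.0 - view_degree
    return float(np.sum(w * per_point) / np.sum(w))

def cgan_condition(masked_img, landmark_map):
    """Stack the masked face image and a landmark heatmap channel-wise,
    forming the conditional input of the second-stage generator."""
    return np.concatenate([masked_img, landmark_map], axis=0)
```

For a frontal face (`view_degree = 0`) the loss reduces to a plain mean squared landmark error; as the view degree grows, errors on the occluded half contribute progressively less, which matches the abstract's idea of adjusting the loss dynamically by view degree.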

Keywords