Sensors (May 2022)

Application of Improved CycleGAN in Laser-Visible Face Image Translation

  • Mingyu Qin,
  • Youchen Fan,
  • Huichao Guo,
  • Mingqian Wang

DOI
https://doi.org/10.3390/s22114057
Journal volume & issue
Vol. 22, no. 11
p. 4057

Abstract

CycleGAN is widely used for image translation tasks, such as thermal-infrared-to-visible, near-infrared-to-visible, and shortwave-infrared-to-visible image translation. However, most existing work targets infrared-to-visible translation, while the widespread adoption of laser imaging creates a growing demand for laser-to-visible image translation. In addition, current translation methods are mainly aimed at frontal face images and cannot effectively translate faces viewed at an angle. In this paper, we construct a laser–visible face mapping dataset. To mitigate the gradient vanishing caused by the original adversarial objective, the cross-entropy loss is replaced with a least-squares loss, and an identity loss is added to strengthen the constraints on the generator. The experimental results indicate that, compared with the CycleGAN, Pix2pix, U-GAT-IT and StarGAN models, the SSIM value of the improved model increases by 1.25%, 8%, 0 and 8%, respectively; the PSNR value is comparable; and the FID value decreases by 11.22, 12.85, 43.37 and 72.19, respectively. For profile (angled) face translation, because CycleGAN's original ResNet feature-extraction module performs poorly, it is replaced with an RRDB module on top of the first improvement. The experimental results show that, compared with the CycleGAN, Pix2pix, U-GAT-IT, StarGAN and the first improved model, the SSIM value of the improved model increased by 3.75%, 10.67%, 2.47%, 10.67% and 2.47%, respectively; the PSNR value increased by 1.02, 2.74, 0.32, 0.66 and 0.02, respectively; and the FID value decreased by 26.32, 27.95, 58.47, 87.29 and 15.1, respectively. Subjectively, the facial contours and features are better preserved.
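
For reference, the least-squares adversarial objective and identity loss mentioned in the abstract are conventionally written as follows. This is the standard LSGAN/CycleGAN formulation with generic symbols (generators G, F and discriminator D_Y), not necessarily the exact notation or weighting used in the paper:

\mathcal{L}_{\mathrm{LSGAN}}(G, D_Y) = \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\left[(D_Y(y) - 1)^2\right] + \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[D_Y(G(x))^2\right]

\mathcal{L}_{\mathrm{identity}}(G, F) = \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\left[\lVert G(y) - y \rVert_1\right] + \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\lVert F(x) - x \rVert_1\right]

The quadratic penalty of the least-squares loss keeps gradients informative for samples far from the decision boundary, and the identity term encourages the generator to act as the identity on images already in its target domain.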

Keywords