IEEE Access (Jan 2024)

Enhancing Upscaled Image Resolution Using Hybrid Generative Adversarial Network-Enabled Frameworks

  • R. Geetha,
  • G. Belshia Jebamalar,
  • S. Arumai Shiney,
  • Nhu-Ngoc Dao,
  • Hyeonjoon Moon,
  • Sungrae Cho

DOI
https://doi.org/10.1109/ACCESS.2024.3367763
Journal volume & issue
Vol. 12
pp. 27784–27793

Abstract


Upscaling face images typically relies on facial priors, such as facial geometry details or reference images, to rebuild plausible detail. Low-quality input images, however, cannot provide accurate geometric details, and high-quality reference details are often unavailable, which limits performance in practical conditions. This work addresses the problem with an enhanced generative adversarial network (GAN) model that incorporates the rich and varied facial features encapsulated in a pretrained StyleGAN2 for face restoration. The generated facial features are injected into the face restoration process through spatial feature transform layers to improve facial detail and color quality and thereby increase reliability. In parallel, the image background is upsampled by a modified enhanced super-resolution GAN trained to remove noise while recreating a high-resolution image from the low-quality input. An image-resolution-upscaling GAN then enlarges the image for a clear view without sacrificing the original image quality. Finally, by combining the upscaled background with the restored faces, the hybrid GAN-enabled framework obtains high-resolution upscaled images. An experimental comparison was conducted on the Flickr-Faces-HQ dataset collected from Kaggle. The findings indicate that the proposed framework outperforms existing methods in terms of reconstruction, adversarial, and facial-component loss metrics as well as similarity indexes.
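
The abstract's integration step injects the StyleGAN2-derived facial features into the restoration branch through spatial feature transform (SFT) layers, which modulate decoder features with per-pixel scale and shift maps predicted from the prior. The snippet below is a minimal PyTorch sketch of a generic SFT layer for illustration only; the class name, channel sizes, and two-convolution head design are assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class SpatialFeatureTransform(nn.Module):
    """Generic SFT layer: scales and shifts restoration features using
    spatial maps predicted from prior features (e.g., generated facial
    features from a pretrained face generator)."""

    def __init__(self, feat_channels: int, prior_channels: int, hidden: int = 64):
        super().__init__()
        # Convolutional heads mapping the prior to per-pixel scale (gamma)
        # and shift (beta) maps with the same shape as the feature map.
        self.scale_head = nn.Sequential(
            nn.Conv2d(prior_channels, hidden, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(hidden, feat_channels, 3, padding=1),
        )
        self.shift_head = nn.Sequential(
            nn.Conv2d(prior_channels, hidden, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(hidden, feat_channels, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        gamma = self.scale_head(prior)  # per-pixel scale map
        beta = self.shift_head(prior)   # per-pixel shift map
        return feat * (1.0 + gamma) + beta


if __name__ == "__main__":
    # Toy shapes (assumed): 64-channel restoration features modulated by a
    # 512-channel prior feature map, both at 32x32 spatial resolution.
    sft = SpatialFeatureTransform(feat_channels=64, prior_channels=512)
    feat = torch.randn(1, 64, 32, 32)
    prior = torch.randn(1, 512, 32, 32)
    print(sft(feat, prior).shape)  # torch.Size([1, 64, 32, 32])

The 1 + gamma formulation keeps the modulation close to identity when the predicted scale is near zero, a common choice in prior-guided restoration networks; the paper's exact layer configuration may differ.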

Keywords