Complex & Intelligent Systems (Sep 2022)

Multi-scale progressive blind face deblurring

  • Hao Zhang,
  • Canghong Shi,
  • Xian Zhang,
  • Linfeng Wu,
  • Xiaojie Li,
  • Jing Peng,
  • Xi Wu,
  • Jiancheng Lv

DOI
https://doi.org/10.1007/s40747-022-00865-9
Journal volume & issue
Vol. 9, no. 2
pp. 1439 – 1453

Abstract

Blind face deblurring aims to recover a sharper face from an unknown degraded version of it (e.g., under different motion blurs and noise). However, most previous works rely on facial priors extracted from the degraded low-quality inputs, which generally leads to unrealistic deblurring results. In this paper, we propose a multi-scale progressive face-deblurring generative adversarial network (MPFD-GAN) that requires no facial priors and generates more realistic multi-scale deblurring results in a single feed-forward pass. Specifically, MPFD-GAN comprises two core modules: the feature retention module and the texture reconstruction module (TRM). The former captures non-local similar features by taking full advantage of different receptive fields, which helps the network recover the complete facial structure. The latter adopts a supervisory attention mechanism that fully exploits the recovered low-scale face to refine incoming features at every scale before propagating them further. Moreover, TRM extracts high-frequency texture information from the recovered low-scale face using the Laplace operator, which guides subsequent steps to progressively recover faithful facial texture details. Experimental results on the CelebA, UTKFace and CelebA-HQ datasets demonstrate the effectiveness of the proposed network, which achieves better accuracy and visual quality than state-of-the-art methods.
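The TRM's use of the Laplace operator to pull high-frequency texture out of the recovered low-scale face amounts to a high-pass filter over the image. The following is a minimal NumPy sketch of that idea, not the authors' code; the particular 3x3 kernel and edge padding are assumptions, since the abstract does not specify the discretization:

```python
import numpy as np

# A common 3x3 discrete Laplacian kernel (an assumption; the paper does not
# state which discretization of the Laplace operator is used).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def laplacian_highpass(img: np.ndarray) -> np.ndarray:
    """Return the high-frequency component of a 2-D grayscale image.

    Flat regions map to ~0; edges and fine texture produce large
    magnitudes, which is what guides the texture refinement.
    """
    h, w = img.shape
    # Replicate-pad by 1 pixel so the output keeps the input size.
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    # Correlate with the kernel (explicit loops keep the sketch dependency-free).
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out
```

A constant image yields an all-zero response, while a vertical step edge produces nonzero values only in the columns adjacent to the edge, illustrating how the operator isolates texture detail from smooth regions.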

Keywords