IET Computer Vision (Jun 2024)

A Decoder Structure Guided CNN‐Transformer Network for face super‐resolution

  • Rui Dou,
  • Jiawen Li,
  • Xujie Wan,
  • Heyou Chang,
  • Hao Zheng,
  • Guangwei Gao

DOI: https://doi.org/10.1049/cvi2.12251
Journal volume & issue: Vol. 18, no. 4, pp. 473–484

Abstract


Recent advances in deep convolutional neural networks have shown improved performance in face super‐resolution through joint training with other tasks such as face analysis and landmark prediction. However, these methods have certain limitations. One major limitation is the requirement for manually annotated prior information in the dataset for multi‐task joint learning; this additional annotation process also increases the computational cost of the network. Moreover, because the prior information is estimated from low‐quality faces, the resulting guidance tends to be inaccurate. To address these challenges, a novel Decoder Structure Guided CNN‐Transformer Network (DCTNet) is introduced, which utilises the newly proposed Global‐Local Feature Extraction Unit (GLFEU) for effective feature embedding. Specifically, the proposed GLFEU combines an attention branch and a Transformer branch to simultaneously restore the global facial structure and local texture details. Additionally, a Multi‐Stage Feature Fusion Module is incorporated to fuse features from different network stages, further improving the quality of the restored face images. Compared with previous methods, DCTNet improves Peak Signal‐to‐Noise Ratio (PSNR) by 0.23 dB and 0.19 dB on the CelebA and Helen datasets, respectively. Experimental results demonstrate that the designed DCTNet offers a simple yet powerful solution for recovering detailed facial structures from low‐quality images.
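To make the dual‐branch idea concrete, below is a minimal PyTorch sketch of a global‐local feature extraction unit in the spirit of the GLFEU described above: a CNN branch with channel attention for local texture and a self‐attention (Transformer) branch over spatial tokens for global structure. The layer sizes, the channel‐attention gate, and the residual fusion by addition are illustrative assumptions, not the authors' exact GLFEU design.

```python
# Minimal sketch of a dual-branch global-local feature extraction unit.
# Assumes PyTorch; all hyperparameters are illustrative, not from DCTNet.
import torch
import torch.nn as nn


class GlobalLocalUnit(nn.Module):
    def __init__(self, channels: int = 64, num_heads: int = 4):
        super().__init__()
        # Local branch: convolutions gated by channel attention,
        # intended to capture fine texture details.
        self.local_conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Global branch: self-attention over flattened spatial positions,
        # intended to model the overall facial structure.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Local branch with channel-attention gating.
        local = self.local_conv(x)
        local = local * self.channel_gate(local)
        # Global branch: (B, C, H, W) -> (B, H*W, C) token sequence.
        tokens = self.norm(x.flatten(2).transpose(1, 2))
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        # Fuse both branches with a residual connection (an assumed choice).
        return x + local + glob


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)  # a 64-channel 32x32 feature map
    print(GlobalLocalUnit()(feat).shape)  # torch.Size([1, 64, 32, 32])
```

Fusing the two branches additively with a residual shortcut is one common choice for combining CNN and Transformer features; the paper's actual unit and its Multi‐Stage Feature Fusion Module may weight or concatenate the streams differently.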

Keywords