IEEE Access (Jan 2024)

Semantic-Aware Guided Low-Light Image Super-Resolution

  • Sheng Ren,
  • Rui Cao,
  • Wenxue Tan,
  • Yayuan Tang

DOI: https://doi.org/10.1109/ACCESS.2024.3403096
Journal volume & issue: Vol. 12, pp. 72408–72419

Abstract

Deep-learning-based single-image super-resolution has achieved extraordinary performance. However, due to unavoidable environmental or technological limitations, some images have not only low resolution but also low brightness. Existing super-resolution methods applied to such low-light inputs may produce results that remain dark and lack detail. In this paper, we propose a semantic-aware guided low-light image super-resolution method. First, we present a semantic-perception-guided super-resolution framework that exploits the rich semantic prior knowledge of a semantic network module. Through the semantic-aware guidance module, reference semantic features and target image features are fused via quantitative attention, guiding low-light image features to maintain semantic consistency during reconstruction. Second, we design a self-calibrated light adjustment module that constrains the convergence consistency of each illumination estimation block through a self-calibrated block, improving the stability and robustness of the brightness-enhanced output features. Third, we design a lightweight super-resolution module based on spatial and channel reconstruction convolution, which uses an attention module to further enhance reconstruction capability. Our model surpasses methods such as RDN, RCAN, and NLSN in both qualitative and quantitative evaluations of low-light image super-resolution, and the experiments demonstrate its efficiency and effectiveness.
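The following is a minimal, self-contained PyTorch sketch of the semantic-aware guidance idea summarized above: reference semantic features (e.g., from a pretrained segmentation network) are fused with target image features through attention weighting so that reconstruction preserves semantic consistency. The module name, layer shapes, and fusion rule here are illustrative assumptions, not the authors' published implementation.

# Sketch of attention-weighted fusion of semantic priors into image features.
# All design choices (1x1 projection, channel attention, residual blending)
# are assumptions for illustration.
import torch
import torch.nn as nn


class SemanticGuidanceFusion(nn.Module):
    """Fuse semantic prior features into image features via channel attention."""

    def __init__(self, img_channels: int, sem_channels: int):
        super().__init__()
        # Project semantic features into the image feature space (assumed design).
        self.proj = nn.Conv2d(sem_channels, img_channels, kernel_size=1)
        # Predict per-channel attention weights from the concatenated features.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(img_channels * 2, img_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, img_feat: torch.Tensor, sem_feat: torch.Tensor) -> torch.Tensor:
        # Match the semantic features' spatial resolution to the image features.
        sem = nn.functional.interpolate(
            self.proj(sem_feat), size=img_feat.shape[-2:],
            mode="bilinear", align_corners=False,
        )
        # Per-channel weights in (B, C, 1, 1), broadcast over spatial dims.
        w = self.attn(torch.cat([img_feat, sem], dim=1))
        # Quantitatively blend semantic guidance into the image features.
        return img_feat + w * sem


if __name__ == "__main__":
    fusion = SemanticGuidanceFusion(img_channels=64, sem_channels=150)
    img_feat = torch.randn(1, 64, 32, 32)    # features of the low-light input
    sem_feat = torch.randn(1, 150, 16, 16)   # e.g., segmentation logits
    print(fusion(img_feat, sem_feat).shape)  # torch.Size([1, 64, 32, 32])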

Keywords