AIP Advances (Aug 2021)

SMGAN: A self-modulated generative adversarial network for single image dehazing

  • Nian Wang,
  • Zhigao Cui,
  • Yanzhao Su,
  • Chuan He,
  • Yunwei Lan,
  • Aihua Li

DOI
https://doi.org/10.1063/5.0059424
Journal volume & issue
Vol. 11, no. 8
pp. 085227 – 085227-10

Abstract

Single image dehazing has become a key prerequisite for most high-level computer vision tasks since haze severely degrades input images. Traditional prior-based methods dehaze images using assumptions derived from haze-free images; they recover high-quality results but often introduce halos or color distortion. Recently, many methods have used convolutional neural networks to learn haze-relevant features and then recover the original images. These learning-based methods achieve better performance on synthetic scenes but can hardly restore a clear image with discriminative texture when applied to real-world images, mainly because the networks are trained on synthetic datasets. To solve these problems, a self-modulated generative adversarial network for single image dehazing, named SMGAN, is proposed. SMGAN feeds prior-dehazed images into a parameter-shared encoder to produce latent information about these dehazed images. During hazy image decoding, the latent information is sent to self-modulated batch normalization layers, which adapts the network to real-world haze removal. Moreover, considering that the guidance images contain some over-enhanced regions, a refinement module is proposed to suppress this negative information. The proposed SMGAN combines the advantages of prior-based and learning-based methods, yielding superior performance compared with state-of-the-art methods on both synthetic and real-world datasets.
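
To illustrate the self-modulation mechanism the abstract describes, the following is a minimal PyTorch sketch of a self-modulated batch normalization layer whose affine parameters are predicted from a latent code (such as the encoder output for a prior-dehazed image). This is a hypothetical reconstruction of the general technique, not the authors' exact implementation; the class name, the two-linear-layer modulation heads, and all dimensions are assumptions for illustration.

    import torch
    import torch.nn as nn

    class SelfModulatedBatchNorm(nn.Module):
        """Batch norm whose per-channel scale and shift are predicted
        from a latent vector z (hypothetical sketch, not the paper's code).
        """
        def __init__(self, num_features: int, latent_dim: int):
            super().__init__()
            # Normalize without learnable affine parameters;
            # the modulation branch supplies them instead.
            self.bn = nn.BatchNorm2d(num_features, affine=False)
            # Small linear heads mapping the latent code to gamma and beta.
            self.to_gamma = nn.Linear(latent_dim, num_features)
            self.to_beta = nn.Linear(latent_dim, num_features)

        def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
            # x: (N, C, H, W) decoder activations; z: (N, latent_dim) code.
            h = self.bn(x)
            gamma = self.to_gamma(z).unsqueeze(-1).unsqueeze(-1)  # (N, C, 1, 1)
            beta = self.to_beta(z).unsqueeze(-1).unsqueeze(-1)
            # Centered scale so the layer starts near an identity mapping.
            return (1.0 + gamma) * h + beta

    # Minimal usage example with dummy shapes.
    if __name__ == "__main__":
        smbn = SelfModulatedBatchNorm(num_features=64, latent_dim=128)
        feats = torch.randn(4, 64, 32, 32)  # decoder feature map
        latent = torch.randn(4, 128)        # code from the parameter-shared encoder
        out = smbn(feats, latent)
        print(out.shape)                    # torch.Size([4, 64, 32, 32])

Under these assumptions, the latent information from the prior-dehazed image steers the decoder's normalization statistics at every resolution, which is one plausible way a network trained on synthetic data could be conditioned toward real-world haze removal.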