IEEE Access (Jan 2021)
Haze Relevant Feature Attention Network for Single Image Dehazing
Abstract
Single image dehazing methods based on deep learning techniques have made great achievements in recent years. However, some methods recover haze-free images by estimating the so-called transmission map and global atmospheric light, which strictly limits them to the simplified atmospheric scattering model and does not fully exploit the capacity of deep networks to fit complex functions. Other methods require paired training data, whereas in practice pairs of hazy and corresponding haze-free images are difficult to obtain. To address these problems, inspired by the cycle-consistent generative adversarial network, we have developed an end-to-end haze-relevant feature attention network for single image dehazing, which does not require paired training images. Specifically, we make explicit use of haze-relevant features by embedding an attention module into a novel dehazing generator that combines an encoder-decoder structure with dense blocks. The constructed network adopts a novel strategy that derives attention maps from several hand-designed priors, such as the dark channel, color attenuation, and maximum contrast. Since haze is usually unevenly distributed across an image, the attention maps serve as guidance for the amount of haze at each image pixel. Meanwhile, dense blocks maximize information flow among features from different levels. Furthermore, a color loss is proposed to avoid color distortion and generate visually better haze-free images. Extensive experiments demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods.
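To make the prior-based attention idea concrete, below is a minimal sketch of how a haze-density map can be derived from the dark channel prior mentioned in the abstract: the per-pixel minimum over color channels is pooled with a local minimum filter and normalized to [0, 1], so brighter map values roughly indicate hazier regions. This is an illustrative reconstruction, not the paper's exact attention module; the function names, the patch size, and the min-max normalization are assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Dark channel prior: per-pixel minimum over RGB channels,
    followed by a local minimum filter over a square patch.
    `image` is an HxWx3 float array in [0, 1]."""
    min_over_channels = image.min(axis=2)
    return minimum_filter(min_over_channels, size=patch_size)

def haze_attention_map(image, patch_size=15):
    """Normalize the dark channel to [0, 1] as a rough haze-density
    map; hazy regions (bright, low-saturation) score higher."""
    dc = dark_channel(image, patch_size)
    return (dc - dc.min()) / (dc.max() - dc.min() + 1e-8)
```

In the network described by the abstract, maps like this (one per hand-designed prior) would be stacked and fed to the attention module so the generator can weight its features by estimated haze density, rather than the map being used directly to invert the scattering model.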
Keywords