IEEE Access (Jan 2021)
Compression Artifacts Reduction Using Fusion of Multiple Restoration Networks
Abstract
Lossy video compression achieves coding gains at the expense of quality loss in the decoded images. Owing to the success of deep learning techniques, especially convolutional neural networks (CNNs), many compression artifacts reduction (CAR) techniques have significantly improved the quality of decoded images by applying CNNs trained to predict the original, artifact-free images from the decoded images. Most existing video compression standards control the compression ratio using a quantization parameter (QP), so the quality of the decoded images is strongly QP-dependent. Training individual CNNs for predetermined QPs is a common approach to dealing with different levels of compression artifacts. However, compression artifacts also depend on the local characteristics of an image. Therefore, a CNN trained for a specific QP cannot fully remove the compression artifacts of all images, even those encoded using the same QP. In this paper, we introduce a pixel-precise network selection network (PNSNet). From multiple reconstructed images obtained using multiple QP-specific CAR networks, PNSNet is trained to find the best CAR network for each pixel. The output of PNSNet is then used as an explicit spatial attention channel for an image fusion network that combines the multiple reconstructed images. Experimental results demonstrate that the quality of decoded images can be significantly improved by the proposed multiple CAR network fusion method.
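To make the described pipeline concrete, the following is a minimal PyTorch sketch of the idea stated in the abstract: several QP-specific CAR networks produce candidate restorations, a selection module predicts a per-pixel soft assignment over the candidates, and that map conditions a fusion module. All module names, layer sizes, and shapes here are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class PixelwiseSelection(nn.Module):
    """Predicts a per-pixel soft assignment over K candidate restorations
    (a stand-in for the paper's pixel-precise network selection, PNSNet)."""
    def __init__(self, k_candidates: int, in_channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels * (k_candidates + 1), 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, k_candidates, 3, padding=1),
        )

    def forward(self, decoded, candidates):
        # decoded: (B, C, H, W); candidates: (B, K, C, H, W)
        b, k, c, h, w = candidates.shape
        x = torch.cat([decoded, candidates.reshape(b, k * c, h, w)], dim=1)
        return torch.softmax(self.body(x), dim=1)  # (B, K, H, W) selection map

class FusionNet(nn.Module):
    """Fuses the K candidates, using the selection map as an explicit
    spatial attention input (illustrative layer sizes)."""
    def __init__(self, k_candidates: int, in_channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(k_candidates * (in_channels + 1), 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, in_channels, 3, padding=1),
        )

    def forward(self, candidates, selection):
        b, k, c, h, w = candidates.shape
        x = torch.cat([candidates.reshape(b, k * c, h, w), selection], dim=1)
        return self.body(x)

# Hypothetical usage: three QP-specific restorations of a 64x64 luma patch.
decoded = torch.rand(1, 1, 64, 64)
candidates = torch.rand(1, 3, 1, 64, 64)
selection = PixelwiseSelection(k_candidates=3)(decoded, candidates)
fused = FusionNet(k_candidates=3)(candidates, selection)  # (1, 1, 64, 64)
```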
Keywords