IEEE Access (Jan 2019)

Attention-Based Dual-Scale CNN In-Loop Filter for Versatile Video Coding

  • Ming-Ze Wang
  • Shuai Wan
  • Hao Gong
  • Ming-Yang Ma

DOI: https://doi.org/10.1109/ACCESS.2019.2944473
Journal volume & issue: Vol. 7, pp. 145214 – 145226

Abstract

As the upcoming video coding standard, Versatile Video Coding (VVC) achieves up to 30% Bjøntegaard delta bit-rate (BD-rate) reduction compared with High Efficiency Video Coding (H.265/HEVC). To eliminate or alleviate compression artifacts such as blocking, ringing, blurring and contouring, three in-loop filters, i.e., the de-blocking filter (DBF), sample adaptive offset (SAO) and adaptive loop filter (ALF), have been adopted in VVC. Recently, the Convolutional Neural Network (CNN) has attracted tremendous attention and shown great potential in many image processing tasks. In this work, we design a CNN-based in-loop filter as an integrated single-model solution that adapts to almost any video coding scenario. An architecture named ADCNN (Attention-based Dual-scale CNN) with an attention-based processing block is proposed to reduce the artifacts of I frames and B frames, taking advantage of informative priors such as the quantization parameter (QP) and partitioning information. Unlike existing CNN-based filtering methods, which are mainly designed for the luma component and may need separately trained models for different QPs, the proposed filter adapts to different QPs and different frame types, and all components (both luma and chroma) are processed simultaneously, with features exchanged and fused between components so that each supplements the others. Experimental results show that the proposed ADCNN filter achieves 6.54%, 13.27% and 15.72% BD-rate savings for Y, U and V, respectively, under the all-intra configuration, and 2.81%, 7.86% and 8.60% under the random access configuration. It can be used to replace all the conventional in-loop filters, which it outperforms without increasing encoding time.
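The abstract outlines the key design ideas: attention-based processing blocks, coding priors (a QP map and partition information) fed to the network so that a single model covers all QPs and frame types, and joint luma/chroma filtering. As a rough illustration only, the PyTorch sketch below shows one way such priors and channel attention could be wired together; all class names, layer sizes, and the QP normalization are hypothetical assumptions, not the authors' ADCNN implementation, and the dual-scale branch is omitted for brevity.

```python
# Hypothetical sketch (not the authors' ADCNN code): an attention-based
# in-loop filter that conditions on coding priors. The reconstructed YUV
# planes are concatenated with a per-pixel QP map and a partition map, and
# a channel-attention branch reweights the features of each block.

import torch
import torch.nn as nn

class AttentionBlock(nn.Module):
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Squeeze-and-excitation style channel attention.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.body(x)
        return x + f * self.attn(f)  # residual connection, attention-gated

class InLoopFilter(nn.Module):
    """Single model for all QPs: the priors enter as extra input channels."""
    def __init__(self, features: int = 64, num_blocks: int = 4):
        super().__init__()
        # 3 YUV planes (chroma assumed upsampled to luma resolution)
        # + 1 QP map + 1 partition map = 5 input channels.
        self.head = nn.Conv2d(3 + 2, features, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(
            *[AttentionBlock(features) for _ in range(num_blocks)]
        )
        self.tail = nn.Conv2d(features, 3, kernel_size=3, padding=1)

    def forward(self, yuv, qp_map, part_map):
        x = torch.cat([yuv, qp_map, part_map], dim=1)
        return yuv + self.tail(self.blocks(self.head(x)))  # predict residual

# Toy usage: a 64x64 patch at QP 37 (QP normalized by 63, VVC's maximum).
yuv = torch.rand(1, 3, 64, 64)
qp = torch.full((1, 1, 64, 64), 37 / 63.0)
part = torch.zeros(1, 1, 64, 64)  # e.g. 1.0 on partition boundaries
print(InLoopFilter()(yuv, qp, part).shape)  # torch.Size([1, 3, 64, 64])
```

Feeding the QP and partition maps as input channels, rather than training one network per QP, is what lets a single set of weights serve every quantization level and frame type, as the abstract describes.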

Keywords