IEEE Open Journal of Signal Processing (Jan 2021)

A CNN-Based Prediction-Aware Quality Enhancement Framework for VVC

  • Fatemeh Nasiri,
  • Wassim Hamidouche,
  • Luce Morin,
  • Nicolas Dhollande,
  • Gildas Cocherel

DOI
https://doi.org/10.1109/OJSP.2021.3092598
Journal volume & issue
Vol. 2
pp. 466 – 483

Abstract

This paper presents a framework for Convolutional Neural Network (CNN)-based quality enhancement that takes advantage of coding information available in the compressed video signal. The motivation is that normative decisions made by the encoder can significantly affect the type and strength of artifacts in the decoded images. The main focus is placed on the decisions that define the prediction signal in intra and inter frames. This information is used both during training and as network input, helping the model learn artifacts specific to each coding type. Furthermore, to keep the memory requirements of the proposed method low, a single model is used for all Quantization Parameters (QPs) by means of a QP-map, and this model is also shared between the luma and chroma components. In addition to the Post-Processing (PP) approach, In-Loop Filtering (ILF) codec integration is also considered, where the characteristics of the Group of Pictures (GoP) are taken into account to boost performance. The proposed CNN-based Quality Enhancement (QE) framework has been implemented on top of the Versatile Video Coding (VVC) Test Model (VTM-10). Experiments show that, at the same network complexity, the prediction-aware aspect of the proposed method improves the coding efficiency of the default CNN-based QE filter by 1.52% in terms of BD-BR.
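The abstract describes feeding the decoded frame together with the prediction signal and a QP-map into the enhancement network. A minimal sketch of how such a multi-channel input tensor could be assembled is shown below; the helper name `build_qe_input`, the channel ordering, and the normalization by the maximum VVC QP (63) are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def build_qe_input(decoded, prediction, qp, qp_max=63):
    """Hypothetical helper: stack the decoded frame, the encoder's
    prediction signal, and a constant QP-map into one CNN input.

    decoded, prediction : 2-D float arrays in [0, 1], same shape.
    qp                  : scalar quantization parameter (VVC range 0..63).
    Returns a (3, H, W) array: one channel per information source.
    """
    # A QP-map lets a single model serve all QPs: the network sees the
    # normalized QP at every pixel instead of needing one model per QP.
    qp_map = np.full(decoded.shape, qp / qp_max, dtype=np.float64)
    return np.stack([decoded, prediction, qp_map], axis=0)

# Toy usage with random 64x64 "frames" standing in for real video data.
rng = np.random.default_rng(0)
dec = rng.random((64, 64))
pred = rng.random((64, 64))
x = build_qe_input(dec, pred, qp=32)
print(x.shape)  # (3, 64, 64)
```

In a prediction-aware setup like the one described, the network can then learn artifact patterns conditioned on how each region was predicted, rather than treating all pixels identically.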

Keywords