IEEE Open Journal of Circuits and Systems (Jan 2021)
Context-Adaptive Inverse Quantization for Inter-Frame Coding
Abstract
In the hybrid video coding framework, quantization is the key technique to achieve lossy compression. The information loss caused by quantization can be reduced to improve compression efficiency, either by encoder-side rate-distortion optimized quantization or by decoder-side filtering. Nonetheless, existing studies have not extensively used already encoded information, i.e., context, to reduce quantization loss. We address this issue and propose a context-adaptive inverse quantization method, namely CAIQ. Specifically, for inter-frame coding, we analyze the correlation between the prediction signal (generated by motion-compensated prediction) and the residual signal, as well as the correlation within the residual signal itself. We then present linear as well as lightweight nonlinear models to exploit the observed correlations in the frequency domain. Our models provide an optional inverse quantization mode that refers to the prediction signal, which is available at the decoder side. Block-level mode selection for CAIQ is then performed at the encoder side. We integrate the proposed CAIQ method into the reference software of Versatile Video Coding, conduct an extensive study of the models, and analyze the resulting compression efficiency gain and encoding/decoding complexity. Experimental results show that CAIQ improves compression performance, especially for high-resolution videos and at high bit rates.
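The following is a minimal sketch of the idea, not the paper's actual model: it assumes a simple per-frequency linear rule in which each dequantized residual coefficient is refined using the collocated DCT coefficient of the motion-compensated prediction block. The function name caiq_dequantize and the weights alpha/beta are hypothetical placeholders introduced for illustration only.

    import numpy as np
    from scipy.fft import dctn, idctn

    def caiq_dequantize(levels, qstep, prediction, alpha, beta):
        # levels     : quantized transform coefficients (integer levels)
        # qstep      : scalar quantization step size
        # prediction : motion-compensated prediction block (spatial domain)
        # alpha,beta : hypothetical per-frequency linear weights

        # Conventional (context-free) inverse quantization.
        rec_coeff = levels.astype(np.float64) * qstep
        # Context: transform the prediction signal into the same frequency domain.
        pred_coeff = dctn(prediction.astype(np.float64), norm="ortho")
        # Linear, context-adaptive refinement of each coefficient.
        refined = alpha * rec_coeff + beta * pred_coeff
        # Back to the spatial domain; the result is the reconstructed residual,
        # which the decoder adds to the prediction as usual.
        return idctn(refined, norm="ortho")

    # Example: an 8x8 block; alpha = 1 and beta = 0 reduce to standard dequantization.
    rng = np.random.default_rng(0)
    pred = rng.integers(0, 256, (8, 8)).astype(np.float64)
    lvls = rng.integers(-3, 4, (8, 8))
    residual = caiq_dequantize(lvls, qstep=20.0, prediction=pred,
                               alpha=np.ones((8, 8)), beta=np.full((8, 8), 0.05))

In the paper's setting, whether a block uses this context-referring dequantization mode or the conventional one is decided per block by the encoder and signalled to the decoder.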
Keywords