IEEE Access (Jan 2019)
Fast Affine Motion Estimation for Versatile Video Coding (VVC) Encoding
Abstract
In this paper, we propose a fast encoding method to accelerate the affine motion estimation (AME) process in versatile video coding (VVC) encoders. The recently launched VVC project for next-generation video coding standardization far outperforms the High Efficiency Video Coding (HEVC) standard in terms of coding efficiency. The first version of the VVC test model (VTM) achieves superior coding efficiency but requires higher encoding complexity due to the advanced inter-prediction techniques applied within the multi-type tree (MTT) structure. In particular, the AME technique in VVC is designed to reduce temporal redundancies caused by non-translational motion, thereby achieving more accurate motion prediction. The VTM encoder, however, incurs considerable computational complexity because the AME process is invoked repeatedly during recursive MTT partitioning. In this paper, we introduce useful features that reflect the statistical characteristics of MTT and AME, and we propose a method that employs these features to skip redundant AME processes. Experimental results show that, compared with VTM 3.0, the proposed method reduces the AME time of VTM to 63% on average, while the coding loss remains within 0.1% under the random-access configuration.
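For context, affine motion in VVC is represented with control-point motion vectors (CPMVs) rather than a single translational vector. In the 4-parameter affine model, the motion vector at sample position $(x, y)$ of a block of width $W$ is derived from the top-left CPMV $(mv_{0x}, mv_{0y})$ and the top-right CPMV $(mv_{1x}, mv_{1y})$ as

$$
\begin{aligned}
mv_x(x, y) &= \frac{mv_{1x} - mv_{0x}}{W}\, x \;-\; \frac{mv_{1y} - mv_{0y}}{W}\, y \;+\; mv_{0x},\\
mv_y(x, y) &= \frac{mv_{1y} - mv_{0y}}{W}\, x \;+\; \frac{mv_{1x} - mv_{0x}}{W}\, y \;+\; mv_{0y},
\end{aligned}
$$

so AME amounts to searching for these CPMVs for every candidate block visited by the recursive MTT partitioning, which is the cost the proposed method targets.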
Keywords