IEEE Access (Jan 2022)

AVQBits—Adaptive Video Quality Model Based on Bitstream Information for Various Video Applications

  • Rakesh Rao Ramachandra Rao,
  • Steve Göring,
  • Alexander Raake

DOI
https://doi.org/10.1109/ACCESS.2022.3195527
Journal volume & issue
Vol. 10
pp. 80321–80351

Abstract

The paper presents AVQBits, a versatile, bitstream-based video quality model. It can be applied in several contexts, such as video service monitoring and the evaluation of video encoding quality, gaming video QoE, and even omnidirectional video quality. The paper shows that AVQBits predictions closely match video quality ratings obtained in various subjective tests with human viewers, for videos up to 4K-UHD resolution (Ultra-High Definition, 3840 × 2160 pixels) and frame rates up to 120 fps. With the different variants of AVQBits presented in the paper, video quality can be monitored either at the client side, in the network, or directly after encoding. The no-reference AVQBits model was developed for different video services and types of input data, reflecting the increasing popularity of Video-on-Demand services and the widespread use of HTTP-based adaptive streaming. At its core, AVQBits encompasses the standardized ITU-T P.1204.3 model, with further model instances that can have either restricted or extended input information, depending on the application context. Four different instances of AVQBits are presented: a Mode 3 model with full access to the bitstream, a Mode 0 variant using only metadata such as codec type, frame rate, resolution, and bitrate as input, a Mode 1 model using Mode 0 information plus frame-type and frame-size information, and a Hybrid Mode 0 model that is based on Mode 0 metadata and the decoded video pixel information. The models are trained on the authors' own AVT-PNATS-UHD-1 dataset described in the paper. All models show highly competitive performance on the AVT-VQDB-UHD-1 validation dataset, e.g., with a Pearson correlation of 0.890 for the Mode 0 variant, 0.901 for the Mode 1 model, 0.928 for the hybrid no-reference Mode 0 model, and 0.942 for the model with full bitstream access.
In addition, all four AVQBits variants are evaluated when applied out-of-the-box to different media formats such as 360° video, high-frame-rate (HFR) content, and gaming videos. The analysis shows that, for the considered use cases, the ITU-T P.1204.3 and Hybrid Mode 0 instances of AVQBits perform on par with or even better than state-of-the-art full-reference, pixel-based models. Furthermore, it is shown that the proposed Mode 0 and Mode 1 variants outperform commonly used no-reference models for the different application scopes. Also, a long-term integration model based on the standardized ITU-T P.1203.3 is presented to estimate ratings of overall audiovisual streaming Quality of Experience (QoE) for sessions of 30 s up to 5 min duration. In the paper, the AVQBits instances, with their per-1-s score output, are evaluated as the video quality component of the proposed long-term integration model. All AVQBits variants as well as the long-term integration module are made publicly available to the community for further research.
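As context for the performance figures quoted in the abstract, the sketch below shows how a Pearson correlation between per-video model predictions and subjective ratings (e.g., MOS values) is computed. The data values are illustrative only, not results from the paper.

```python
import math

def pearson_correlation(predictions, ratings):
    """Pearson correlation between model predictions and subjective ratings."""
    n = len(predictions)
    mean_p = sum(predictions) / n
    mean_r = sum(ratings) / n
    cov = sum((p - mean_p) * (r - mean_r) for p, r in zip(predictions, ratings))
    var_p = sum((p - mean_p) ** 2 for p in predictions)
    var_r = sum((r - mean_r) ** 2 for r in ratings)
    return cov / math.sqrt(var_p * var_r)

# Hypothetical per-video predictions and subjective MOS values (1-5 scale).
preds = [3.1, 4.2, 2.5, 4.8, 3.9]
mos = [3.0, 4.5, 2.2, 4.9, 3.7]
print(round(pearson_correlation(preds, mos), 3))  # → 0.989
```

A value near 1.0 indicates that the model's quality scores track the subjective ratings closely; the paper reports correlations of 0.890 to 0.942 on the AVT-VQDB-UHD-1 validation dataset, depending on the model variant.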

Keywords