Dianxin kexue (Telecommunication Science), Feb 2022

Just noticeable difference model based on video temporal perception characteristics

  • Yafen XING,
  • Haibing YIN,
  • Hongkui WANG,
  • Qionghua LUO

Journal volume & issue
Vol. 38, pp. 92–102

Abstract

Existing temporal-domain JND (just noticeable distortion) models do not sufficiently depict the interaction between temporal parameters and HVS (human visual system) characteristics, which limits the accuracy of spatial-temporal JND models. To address this problem, feature parameters that accurately describe the temporal characteristics of video were explored and extracted, a homogenization method for fusing these heterogeneous feature parameters was devised, and the temporal-domain JND model was improved on this basis. The investigated feature parameters include foreground and background motion, temporal duration along the motion trajectory, residual fluctuation intensity along the motion trajectory, and the adjacent inter-frame prediction residual, which together characterize the temporal properties of video. Probability density functions of these feature parameters in the perceptual sense were formulated according to HVS characteristics, and the heterogeneous feature parameters were uniformly mapped to the scales of self-information and information entropy to achieve a homogeneous fusion measurement. The coupling of visual attention and masking was explored from the perspective of energy distribution, and a temporal-domain JND weight model was constructed accordingly. On the basis of the spatial JND threshold, the temporal-domain weight was integrated to develop a more accurate spatial-temporal JND model. To evaluate the performance of the spatial-temporal JND model, a subjective quality assessment experiment was conducted. Experimental results demonstrate the effectiveness of the proposed model.
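
The abstract does not reproduce the model's equations; as a rough sketch of the described pipeline, using placeholder notation rather than the paper's own symbols (p_i, I, g, W_T and JND_S are illustrative assumptions), the composition can be written as:

% Illustrative sketch only: each temporal feature f_i is mapped through its perceptual
% probability density to the self-information scale, the homogenized features are fused
% into a temporal weight coupling attention and masking, and that weight modulates the
% spatial JND threshold.
\[
  I(f_i) = -\log p_i(f_i), \qquad
  W_T = g\bigl(I(f_1), \dots, I(f_n)\bigr), \qquad
  \mathrm{JND}_{ST}(x, y) = W_T(x, y) \cdot \mathrm{JND}_S(x, y)
\]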

Keywords