IEEE Access (Jan 2018)
No-Reference Stereoscopic Image Quality Assessment Using Convolutional Neural Network for Adaptive Feature Extraction
Abstract
The proliferation of 3-D technologies over the years has given rise to increasing demand for accurate and efficient stereoscopic image quality assessment (SIQA) methods, designed to automatically supervise and optimize 3-D image and video processing systems. Though 2-D image quality assessment (IQA) has attracted considerable attention, its 3-D counterpart remains relatively unexplored. In this paper, a no-reference SIQA method using a convolutional neural network (CNN) for feature extraction is proposed. In the proposed method, a CNN model is trained from scratch to classify images according to their perceptual quality, with quality-aware monocular features extracted from a higher-level layer of the network. Visual saliency models are then used to fuse the captured monocular features. Meanwhile, multi-scale statistical features are derived from the binocular disparity maps. Finally, the fused CNN features and the disparity features are synthesized by support vector regression into the objective quality score of the stereoscopic image. Experimental results on two public databases demonstrate the superior performance of the proposed method over other state-of-the-art methods, in terms of both its accuracy in predicting stereoscopic image quality and its robustness across different databases and distortion types.
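For readers who want a concrete picture of the pipeline summarized above, the following is a minimal sketch, assuming a small toy CNN feature extractor, a simple global saliency-weighted fusion of the two monocular feature vectors, and mean/std/skewness/kurtosis disparity statistics over dyadic scales; the architecture, the saliency weighting rule, and the chosen statistics are illustrative placeholders rather than the authors' exact design.

```python
# Sketch of the SIQA pipeline: CNN monocular features -> saliency-weighted
# fusion -> multi-scale disparity statistics -> SVR quality regression.
# All specific choices below (toy CNN, weighting, statistics) are assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import skew, kurtosis
from sklearn.svm import SVR


class QualityCNN(nn.Module):
    """Toy CNN; features are taken from the layer before the classifier."""
    def __init__(self, n_quality_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        # Trained from scratch to classify images by perceptual quality level.
        self.classifier = nn.Linear(32, n_quality_classes)

    def extract(self, patch):
        with torch.no_grad():
            return self.features(patch).flatten(1)  # higher-level activations


def fuse_monocular(feat_left, feat_right, sal_left, sal_right):
    # Hypothetical fusion rule: weight each view's features by mean saliency.
    w_l, w_r = sal_left.mean(), sal_right.mean()
    return (w_l * feat_left + w_r * feat_right) / (w_l + w_r)


def disparity_stats(disparity, n_scales=3):
    # Multi-scale statistics of the disparity map (illustrative choice).
    feats, d = [], disparity
    for _ in range(n_scales):
        feats += [d.mean(), d.std(), skew(d.ravel()), kurtosis(d.ravel())]
        d = d[::2, ::2]  # simple dyadic downsampling as a stand-in
    return np.array(feats)


def stereo_feature(cnn, left, right, sal_l, sal_r, disparity):
    f_l = cnn.extract(torch.from_numpy(left)[None, None].float()).numpy().ravel()
    f_r = cnn.extract(torch.from_numpy(right)[None, None].float()).numpy().ravel()
    mono = fuse_monocular(f_l, f_r, sal_l, sal_r)
    return np.concatenate([mono, disparity_stats(disparity)])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cnn = QualityCNN()
    # Random stand-ins for left/right views, saliency maps, and disparity maps.
    X = np.stack([
        stereo_feature(cnn,
                       rng.random((64, 64)), rng.random((64, 64)),
                       rng.random((64, 64)), rng.random((64, 64)),
                       rng.random((64, 64)))
        for _ in range(20)
    ])
    y = rng.random(20)  # subjective quality scores (e.g., DMOS); stand-ins here
    reg = SVR(kernel="rbf").fit(X, y)
    print("Predicted quality:", reg.predict(X[:3]))
```

In practice the CNN would be trained on quality-labeled patches and the regressor on subjective scores from the benchmark databases; the sketch only shows how the fused monocular features and disparity statistics are concatenated before support vector regression.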
Keywords