IEEE Access (Jan 2020)
Blind Image Quality Assessment for Super Resolution via Optimal Feature Selection
Abstract
Methods for image Super Resolution (SR) have started to benefit from perceptual quality predictors designed specifically for super-resolved images. However, extensive cross-dataset validation studies have not yet been performed on Image Quality Assessment (IQA) for super-resolved images. Moreover, powerful natural scene statistics-based approaches to IQA have not yet been studied for SR. To address these issues, we introduce a new dataset of super-resolved images with associated human quality scores. The dataset is built on the existing SupER dataset, which contains real low-resolution images, and covers seven SR algorithms at three magnification scales. We selected optimal quality-aware features to create two no-reference (NR), opinion-distortion-unaware (ODU) IQA models. Using the same set of selected features, we also implemented two NR-IQA opinion/distortion-aware (ODA) models. The selection process identified paired-product (PP) features and features derived from discrete cosine transform (DCT) coefficients as the most relevant for quality prediction of SR images. We conducted cross-dataset validation of several state-of-the-art quality algorithms on four datasets, including our new dataset. The experiments indicate that our models outperform state-of-the-art NR-IQA metrics. Our NR-IQA source code and the dataset are available at https://github.com/juanpaberon/IQA_SR.
Keywords