IEEE Access (Jan 2018)

Deep Saliency Quality Assessment Network With Joint Metric

  • Liangzhi Tang,
  • Qingbo Wu,
  • Wei Li,
  • Yinan Liu

DOI
https://doi.org/10.1109/ACCESS.2017.2776344
Journal volume & issue
Vol. 6
pp. 913 – 924

Abstract


Saliency detection aims to find the most conspicuous regions in an image, i.e., the regions that most strongly attract a viewer's attention. A high-quality saliency map plays an important role in boosting many other computer vision tasks, such as object detection and segmentation. To assess a saliency map's quality, the only existing approach is a full-reference metric, i.e., comparison against a ground-truth reference map. In real-world applications, however, no ground-truth reference map for the salient region is available, which creates an urgent demand for no-reference saliency quality metrics. In this paper, we propose a deep saliency quality assessment network (DSQAN) to predict saliency quality scores directly from saliency maps. Furthermore, a joint metric is developed to better characterize the quality of a saliency map. The proposed joint metric not only yields higher prediction accuracy but also produces more robust results. As a direct application of the proposed DSQAN, the predicted saliency quality scores are used to choose the optimal saliency map from a set of candidates. Experimental results on the MSRA10K data set demonstrate that our proposed method can accurately predict saliency quality. In particular, when the DSQAN is used to recommend the optimal saliency map from multiple candidates as input to an object segmentation algorithm, the resulting segmentation accuracy significantly outperforms that achieved with the best individual saliency detection algorithm.
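The candidate-selection application described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: `predict_quality` here is a hypothetical stand-in for the trained DSQAN regressor (the toy proxy simply rewards high-contrast maps), while `select_best_map` shows the selection step itself, i.e., picking the candidate with the highest predicted no-reference quality score.

```python
import numpy as np


def predict_quality(saliency_map):
    """Hypothetical stand-in for the paper's DSQAN regressor:
    score a saliency map WITHOUT any ground-truth reference.
    This toy proxy just rewards high-contrast maps; the real
    network would predict a learned quality score."""
    m = saliency_map.astype(float)
    # normalize to [0, 1] (guard against a constant map)
    m = (m - m.min()) / (np.ptp(m) + 1e-8)
    # higher foreground/background contrast -> higher variance -> higher score
    return float(m.var())


def select_best_map(candidates):
    """Selection step from the abstract: score every candidate
    saliency map and return the index of the best one."""
    scores = [predict_quality(c) for c in candidates]
    return int(np.argmax(scores)), scores


# Toy example: a crisp binary map vs. a uniform (uninformative) one.
clean = np.zeros((8, 8))
clean[2:6, 2:6] = 1.0            # compact, high-contrast salient region
flat = np.full((8, 8), 0.5)      # no structure at all

best_index, scores = select_best_map([flat, clean])
print(best_index)  # -> 1 (the crisp map wins under this toy proxy)
```

In the paper's pipeline, the selected map would then be fed to a downstream object segmentation algorithm in place of any single detector's output.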

Keywords