IET Image Processing (Feb 2023)

Residual dense collaborative network for salient object detection

  • Yibo Han,
  • Liejun Wang,
  • Shuli Cheng,
  • Yongming Li,
  • Anyu Du

DOI
https://doi.org/10.1049/ipr2.12649
Journal volume & issue
Vol. 17, no. 2
pp. 492 – 504

Abstract

Owing to the renaissance of deep convolutional neural networks (CNNs), salient object detection based on fully convolutional networks (FCNs) has attracted widespread attention. However, the scale variation of salient objects, complex background features, and fuzzy edges remain a great challenge, all of which are closely associated with the utilization of multi-level and multi-scale features. At the same time, deep learning methods face practical challenges of computation and memory consumption. To address these problems, the authors propose a salient object detection method based on a residual learning and dense fusion learning framework, named the Residual Dense Collaborative Network (RDCNet). First, the authors design a multi-layer residual learning (MRL) module to extract salient object features in more detail, making the most of the objects' multi-scale and multi-level information. Then, on the basis of the resulting strong stage-wise convolutional features, they put forward a dilated convolution module (DCM) to acquire a rough global saliency map. Finally, the accurate final saliency map is obtained through dense cooperation learning (DCL), with residual learning used for gradual refinement, yielding compact and efficient results. Experimental results show that the method achieves state-of-the-art performance on five widely used datasets (DUTS-TE, HKU-IS, PASCAL-S, ECSSD, DUT-OMRON) without any pre-processing or post-processing. In particular, on the ECSSD dataset the F-measure of RDCNet reaches 95.2%.
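
The abstract describes a dilated convolution module (DCM) that turns deep stage-wise features into a rough global saliency map. The paper's implementation details are not given here, so the following PyTorch snippet is only a minimal sketch of that idea under assumed settings: parallel 3x3 convolutions with increasing dilation rates enlarge the receptive field over a deep backbone feature, and their outputs are fused into a single-channel coarse prediction. The channel sizes, dilation rates, and fusion scheme are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a dilated-convolution module in the spirit of the DCM
# described in the abstract; all hyperparameters here are assumptions.
import torch
import torch.nn as nn


class DilatedConvModule(nn.Module):
    def __init__(self, in_ch=512, mid_ch=128, dilations=(1, 2, 4, 8)):
        super().__init__()
        # One branch per dilation rate; padding = dilation keeps the spatial size.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, mid_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(mid_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Fuse the concatenated branches into a coarse single-channel saliency map.
        self.fuse = nn.Conv2d(mid_ch * len(dilations), 1, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(feats)  # coarse global saliency logits


if __name__ == "__main__":
    dcm = DilatedConvModule()
    coarse = dcm(torch.randn(1, 512, 14, 14))  # e.g. deepest backbone stage
    print(coarse.shape)  # torch.Size([1, 1, 14, 14])
```

In a full pipeline such a coarse map would then be refined by the multi-level features, which is roughly the role the abstract assigns to the dense cooperation learning (DCL) stage.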