PeerJ Computer Science (Dec 2024)

BSEFNet: bidirectional self-attention edge fusion network salient object detection based on deep fusion of edge features

  • Gan Gao,
  • Yuanyuan Wang,
  • Feng Zhou,
  • Shuaiting Chen,
  • Xiaole Ge,
  • Rugang Wang

DOI: https://doi.org/10.7717/peerj-cs.2494
Journal volume & issue: Vol. 10, p. e2494

Abstract

Salient object detection aims to identify the most prominent objects within an image. With the advent of fully convolutional networks (FCNs), deep learning-based saliency detection models have increasingly leveraged FCNs for pixel-level saliency prediction. However, many existing algorithms struggle to delineate target boundaries accurately, primarily because they underutilize edge information. To address this issue, we propose a novel approach that improves the boundary accuracy of salient target detection by integrating salient target and edge information. Our approach comprises two key components: a Self-attentive Group Pixel Fusion module (SGPFM) and a Bidirectional Feature Fusion module (BFF). The SGPFM extracts salient edge features from the lower layers of ResNet50 and salient target features from the higher layers; both are then refined using a self-attention mechanism. The BFF module progressively fuses the salient target and edge features, optimizing them according to their logical relationships and enhancing their complementarity. By combining detailed edge information with positional target information, our method significantly improves the detection accuracy of target boundaries. Experimental results demonstrate that the proposed model outperforms recent state-of-the-art methods across four benchmark datasets, producing accurate and detail-rich salient target predictions. This advancement marks a significant contribution to the development of the field.
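The two-stream design described in the abstract, self-attention refinement of edge and target features followed by a bidirectional, mutually gated fusion, can be illustrated with a minimal sketch. Everything below (feature shapes, the dot-product attention form, the multiplicative gating, and the function names `self_attention` and `bidirectional_fuse`) is an assumption for illustration only; the paper's actual SGPFM and BFF modules operate on learned ResNet50 features and are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_attention(feat):
    """Toy dot-product self-attention over spatial positions (an assumed
    stand-in for SGPFM's refinement, with Q = K = V = feat)."""
    c, n = feat.shape                              # channels x positions
    scores = feat.T @ feat / np.sqrt(c)            # (n, n) affinity matrix
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # row-wise softmax
    return feat @ attn.T                           # re-weighted features

def bidirectional_fuse(edge, target):
    """Hypothetical stand-in for the BFF module: each stream is gated by a
    sigmoid of the other (a soft logical-AND cue) and added back, so edge
    detail sharpens the target map and target location filters edge noise."""
    edge_refined = edge + edge * sigmoid(target)
    target_refined = target + target * sigmoid(edge)
    return edge_refined, target_refined

# Toy features: 4 channels over an 8x8 map flattened to 64 positions.
rng = np.random.default_rng(0)
edge = self_attention(rng.standard_normal((4, 64)))      # low-level stream
target = self_attention(rng.standard_normal((4, 64)))    # high-level stream
fused_edge, fused_target = bidirectional_fuse(edge, target)
print(fused_edge.shape, fused_target.shape)  # (4, 64) (4, 64)
```

The multiplicative gating here is just one simple way to encode a "logical relationship" between the two streams; the published model fuses them progressively across multiple decoder stages rather than in a single step.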

Keywords