IEEE Access (Jan 2023)

Multi-Scale Feature Enhancement for Saliency Object Detection Algorithm

  • Su Li,
  • Rugang Wang,
  • Feng Zhou,
  • Yuanyuan Wang,
  • Naihong Guo

DOI
https://doi.org/10.1109/ACCESS.2023.3317901
Journal volume & issue
Vol. 11
pp. 103511–103520

Abstract


To address the problems of foreground-background misclassification and blurred edges in existing salient object detection models, this study proposes a multi-scale feature enhancement algorithm. In this algorithm, the feature maps of salient objects are extracted using VGG16. A Multi-scale Feature Fusion Module is added to enhance the detailed information of the second feature layer and the semantic information of the fifth feature layer, which effectively improves the ability of the second feature layer to characterize the edges of salient objects and the ability of the fifth feature layer to characterize the salient objects themselves. At the same time, a Feature Enhancement Fusion Module is added to fully fuse local detail information and global semantic information through layer-by-layer fusion from deep to shallow, yielding a feature map with complete feature information. Finally, a complete prediction map with clear edges is obtained by training the network model. The performance of the proposed algorithm is compared with six algorithms, Amulet, R3Net, PoolNet, MINet, PurNet, and NSAL, on the HKU-IS, ECSSD, DUT-OMRON, and DUTS-TE datasets. Relative to these six algorithms, respectively, MAE (Mean Absolute Error) values were reduced by 0.011, 0.009, 0, −0.001, 0.001, and 0.003; F-measure values were improved by 0.037, 0.019, 0.013, 0.017, 0.015, and 0.09; E-measure values were improved by null, −0.008, 0.003, 0.005, −0.014, and 0.047; and S-measure values were improved by 0.073, 0.041, 0.016, 0.021, 0.016, and 0.101. Compared with existing algorithms, the proposed algorithm obtains better detection results and accurately identifies all regions of salient objects.
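The sketch below illustrates the general structure the abstract describes: VGG16 is split into its five convolutional stages, each stage is projected to a common channel width, and the features are fused layer by layer from deep (semantic) to shallow (detail) before a one-channel saliency prediction. It is a minimal assumption-based sketch, not the authors' released implementation; the class names (`SaliencyNetSketch`, `ConvBlock`), channel widths, and the simple concatenate-and-convolve fusion stand in for the paper's Multi-scale Feature Fusion Module and Feature Enhancement Fusion Module.

```python
# Hedged sketch of a deep-to-shallow fusion decoder over VGG16 features.
# Module names and channel choices are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class ConvBlock(nn.Module):
    """3x3 conv + BN + ReLU, used to project/fuse features to a fixed width."""
    def __init__(self, in_ch, out_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class SaliencyNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        feats = vgg16(weights=None).features
        # Split VGG16 into its five convolutional stages (conv1_x ... conv5_x).
        self.stage1 = feats[:4]     # 64 channels
        self.stage2 = feats[4:9]    # 128 channels
        self.stage3 = feats[9:16]   # 256 channels
        self.stage4 = feats[16:23]  # 512 channels
        self.stage5 = feats[23:30]  # 512 channels
        self.proj = nn.ModuleList(ConvBlock(c) for c in [64, 128, 256, 512, 512])
        # One fusion block per decoder step (deep to shallow); 64 + 64 channels in.
        self.fuse = nn.ModuleList(ConvBlock(128, 64) for _ in range(4))
        self.head = nn.Conv2d(64, 1, 1)  # one-channel saliency map

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        f4 = self.stage4(f3)
        f5 = self.stage5(f4)
        feats = [p(f) for p, f in zip(self.proj, [f1, f2, f3, f4, f5])]
        # Layer-by-layer fusion from deep (f5) to shallow (f1): upsample the
        # deeper feature, concatenate with the shallower one, and fuse.
        out = feats[4]
        for i in range(3, -1, -1):
            out = F.interpolate(out, size=feats[i].shape[-2:],
                                mode="bilinear", align_corners=False)
            out = self.fuse[i](torch.cat([feats[i], out], dim=1))
        pred = torch.sigmoid(self.head(out))
        return F.interpolate(pred, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = SaliencyNetSketch()
    y = model(torch.randn(1, 3, 224, 224))
    print(y.shape)  # torch.Size([1, 1, 224, 224])
```

In this sketch the second-stage feature carries edge detail and the fifth-stage feature carries semantics, which is why the paper enhances exactly those two layers before fusion; here they are treated the same way for brevity.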

Keywords