IEEE Access (Jan 2021)

Visualization of Salient Object With Saliency Maps Using Residual Neural Networks

  • Rubina Rashid,
  • Saqib Ubaid,
  • Muhammad Idrees,
  • Rida Rafi,
  • Imran Sarwar Bajwa

DOI: https://doi.org/10.1109/ACCESS.2021.3100155
Journal volume & issue: Vol. 9, pp. 104626–104635

Abstract

Visual saliency techniques based on Convolutional Neural Networks (CNNs) exhibit excellent performance for saliency fixation in a scene, but such networks are harder to train owing to their complexity. A Residual Network Model (ResNet) is better able to optimize features for predicting salient areas, in the form of saliency maps, within images. To obtain saliency maps, an amalgamated framework is presented that contains two streams of a Residual Network Model (ResNet-50). Each ResNet-50 stream is used to enhance low-level and high-level semantic features, building a 99-layer network that operates at two different image scales to generate the normal saliency attention. The model is initialized via transfer learning from weights pretrained on ImageNet for object detection, with some modifications to minimize prediction error. Finally, the two streams integrate their features by fusion across the low- and high-scale image dimensions. The model is fine-tuned on four commonly used datasets and evaluated with both qualitative and quantitative metrics against state-of-the-art deep saliency models.

Keywords