IET Image Processing (Feb 2023)

A novel decoder based on Bayesian rules for task‐driven object segmentation

  • Yuxiang Cai,
  • Yuanlong Yu,
  • Weijie Jiang,
  • Rong Chen,
  • Weitao Zheng,
  • Xi Wu,
  • Renjie Su

DOI
https://doi.org/10.1049/ipr2.12676
Journal volume & issue
Vol. 17, no. 3
pp. 832 – 848

Abstract


As a challenging problem in computer vision, salient object segmentation has attracted increasing attention in recent years. Although many encoder–decoder-based methods have been proposed, they can only recognize and segment one class of objects and cannot segment objects of other classes in the same image. To address this issue, this paper proposes a novel decoder based on Bayesian rules for task-driven object segmentation, in which a control signal is added to the decoder to determine which class of objects needs to be segmented. Furthermore, a Bayesian rule is established in the decoder: the control signal is set as the prior, and the latent features learned in the encoder are transferred to the corresponding decoder layer as the observation, so that the posterior probability of each object belonging to the specified class can be calculated and the objects of that class can be segmented. The proposed method is evaluated for task-driven salient object segmentation on several benchmark datasets, including MS COCO, DUT-OMRON, and ECSSD. Experimental results show that the approach tends to segment accurate, detailed, and complete objects and improves performance compared with the previous state of the art.
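
The abstract describes combining a class-selecting control signal (as a prior) with encoder features (as an observation) to obtain a per-pixel posterior. The following is a minimal NumPy sketch of that general Bayesian combination, not the paper's actual decoder: the projection matrices w_obs and w_prior, the function names, and the toy shapes are all hypothetical placeholders introduced for illustration.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def bayesian_decode(encoder_feat, control_signal, w_obs, w_prior):
        """Combine a class prior (control signal) with encoder observations.

        encoder_feat:   (H, W, D) latent features from an encoder layer
        control_signal: (C,) one-hot vector selecting the target class
        w_obs:          (D, C) hypothetical projection from features to class scores
        w_prior:        (C, C) hypothetical mapping from the control signal to log-priors
        Returns an (H, W, C) posterior map; argmax over C gives a segmentation mask.
        """
        log_likelihood = encoder_feat @ w_obs            # observation term, log p(x | class)
        log_prior = control_signal @ w_prior             # prior term, log p(class), from the control signal
        posterior = softmax(log_likelihood + log_prior)  # p(class | x) proportional to p(x | class) p(class)
        return posterior

    # Toy usage: select class 2 out of C = 5 classes on a 64x64 feature map
    H, W, D, C = 64, 64, 32, 5
    feat = np.random.randn(H, W, D)
    ctrl = np.eye(C)[2]
    posterior = bayesian_decode(feat, ctrl, np.random.randn(D, C), np.random.randn(C, C))
    mask = posterior.argmax(-1) == 2

In the paper, the prior and observation terms are produced by learned network layers rather than fixed matrices; the sketch only illustrates how a class-conditioned prior can gate which objects survive the posterior.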