Electronics Letters (Apr 2019)

Video object segmentation via attention‐modulating networks

  • Runfa Tang,
  • Huihui Song,
  • Kaihua Zhang,
  • Sihao Jiang

DOI
https://doi.org/10.1049/el.2019.0304
Journal volume & issue
Vol. 55, no. 8
pp. 455–457

Abstract

This Letter presents an attention‐modulating network for video object segmentation that adapts its segmentation model effectively to the annotated frame. Specifically, the authors first develop an efficient visual and spatial attention modulator that quickly modulates the segmentation model to focus on the specific object of interest. They then design a channel and spatial attention module and inject it into the segmentation model to further refine its feature maps. In addition, to fuse multi‐scale context information, they construct a feature pyramid attention module that further processes the top‐layer feature maps, achieving better pixel‐level attention for the high‐level features. Finally, to address the sample imbalance issue in training, they employ a focal loss that distinguishes easy samples from difficult ones, accelerating the convergence of network training. Extensive evaluations on the DAVIS2017 dataset show that the proposed approach achieves state‐of‐the‐art performance, outperforming the baseline OSMN by 3.6% and 5.4% in terms of IoU and F‐measure, respectively, without fine‐tuning.
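
For reference, a minimal PyTorch sketch of the pixel‐wise focal loss referred to above is given below. It follows the standard formulation of Lin et al.; the hyper‐parameter values (gamma = 2, alpha = 0.25) are common defaults and are assumptions here, not settings reported in the Letter.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss for pixel-wise segmentation.

    logits:  raw network outputs, shape (N, H, W)
    targets: binary ground-truth masks as floats, shape (N, H, W)
    gamma, alpha: focal-loss hyper-parameters (common defaults from
    Lin et al.; the Letter's exact settings are not given in this abstract).
    """
    # Per-pixel binary cross-entropy, kept unreduced so each pixel can be reweighted.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # p_t is the predicted probability assigned to the true class of each pixel.
    p = torch.sigmoid(logits)
    p_t = targets * p + (1.0 - targets) * (1.0 - p)
    # Class-balancing weight alpha_t for foreground vs. background pixels.
    alpha_t = targets * alpha + (1.0 - targets) * (1.0 - alpha)
    # Down-weight easy (high p_t) pixels by the modulating factor (1 - p_t)^gamma.
    loss = alpha_t * (1.0 - p_t) ** gamma * bce
    return loss.mean()

Averaging this loss over all pixels down‐weights the abundant easy background pixels, which is how a focal loss addresses the sample imbalance mentioned in the abstract.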
