IEEE Access (Jan 2023)

CLIP Driven Few-Shot Panoptic Segmentation

  • Pengfei Xian,
  • Lai-Man Po,
  • Yuzhi Zhao,
  • Wing-Yin Yu,
  • Kwok-Wai Cheung

DOI
https://doi.org/10.1109/ACCESS.2023.3290070
Journal volume & issue
Vol. 11
pp. 72295 – 72305

Abstract


This paper presents CLIP Driven Few-shot Panoptic Segmentation (CLIP-FPS), a novel few-shot panoptic segmentation model that leverages the knowledge embedded in the Contrastive Language-Image Pre-training (CLIP) model. The proposed method builds on a center-indexing attention mechanism to facilitate knowledge transfer: objects in an image are represented as centers together with their per-pixel offsets. The model comprises a decoder that generates object center-offset groups and a self-attention module that produces a feature attention map. The object centers then index this map to retrieve the corresponding embeddings, which are matched against text embeddings via matrix multiplication and a softmax operation to compute the final panoptic segmentation masks. Quantitative evaluation on the COCO and Cityscapes datasets shows that the method outperforms existing panoptic segmentation techniques in terms of the Panoptic Quality (PQ) metric.
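The center-indexing-and-matching step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the use of NumPy, and the random test data are all assumptions; only the overall flow (centers index a feature attention map, the gathered embeddings are matched to text embeddings by matrix multiplication and softmax) comes from the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def match_centers_to_text(attn_map, centers, text_emb):
    """Hypothetical sketch of the center-indexing matching step.

    attn_map : (H, W, D) feature attention map from the self-attention module
    centers  : (N, 2) integer (row, col) object centers from the decoder
    text_emb : (C, D) CLIP text embeddings, one per class name

    Returns the predicted class index and class probabilities per object.
    """
    # Index the attention map at each object center to gather
    # one D-dimensional embedding per object: shape (N, D).
    obj_emb = attn_map[centers[:, 0], centers[:, 1]]
    # Matrix multiplication against text embeddings gives
    # per-object, per-class similarity logits: shape (N, C).
    logits = obj_emb @ text_emb.T
    # Softmax over classes converts logits to matching probabilities.
    probs = softmax(logits, axis=-1)
    return probs.argmax(axis=-1), probs

# Toy usage with random stand-in tensors (shapes are illustrative only).
rng = np.random.default_rng(0)
attn = rng.normal(size=(8, 8, 16))       # 8x8 map, 16-dim embeddings
centers = np.array([[1, 2], [5, 5]])     # two detected object centers
text = rng.normal(size=(3, 16))          # embeddings for 3 class names
labels, probs = match_centers_to_text(attn, centers, text)
```

In the full model, each object's predicted class would then be combined with the center's pixel offsets to assemble the panoptic segmentation mask; that grouping step is omitted here.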

Keywords