Sensors (Jan 2023)

Enhancing Mask Transformer with Auxiliary Convolution Layers for Semantic Segmentation

  • Zhengyu Xia,
  • Joohee Kim

DOI
https://doi.org/10.3390/s23020581
Journal volume & issue
Vol. 23, no. 2
p. 581

Abstract


Transformer-based semantic segmentation methods have achieved excellent performance in recent years. Mask2Former is a well-known transformer-based method that unifies common image segmentation tasks into a universal model. However, because it relies heavily on transformers, it performs relatively poorly at capturing local features and segmenting small objects. To this end, we propose a simple yet effective architecture that introduces auxiliary branches to Mask2Former during training to capture dense local features on the encoder side. The obtained features help improve the learning of local information and the segmentation of small objects. Since the proposed auxiliary convolution layers are required only for training and can be removed during inference, the performance gain is obtained without additional computation at inference. Experimental results show that our model achieves state-of-the-art performance on the ADE20K (57.6% mIoU) and Cityscapes (84.8% mIoU) datasets.
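The training-only auxiliary branch described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the class and method names below are hypothetical stand-ins, and the "branches" are toy functions. The point it shows is structural: the auxiliary convolution path contributes only an extra training signal, so skipping it at inference leaves the main output unchanged and costs nothing extra.

```python
# Hypothetical sketch (not the paper's code): an encoder whose auxiliary
# convolution branch runs only during training. At inference the branch
# is skipped entirely, so it adds no computation then.

class EncoderWithAuxBranch:
    def __init__(self):
        self.training = True  # toggled off for inference

    def main_branch(self, x):
        # Stand-in for the transformer-based encoder/decoder path.
        return [v * 2 for v in x]

    def aux_conv_branch(self, x):
        # Stand-in for the auxiliary convolution layers that capture
        # dense local features; used only to compute an auxiliary loss.
        return [v + 1 for v in x]

    def forward(self, x):
        out = self.main_branch(x)
        if self.training:
            aux = self.aux_conv_branch(x)  # feeds an extra training loss
            return out, aux
        return out, None  # inference: auxiliary branch removed


model = EncoderWithAuxBranch()
train_out, train_aux = model.forward([1, 2, 3])

model.training = False
infer_out, infer_aux = model.forward([1, 2, 3])
```

Because the main branch is untouched by the flag, `infer_out` equals `train_out` while `infer_aux` is `None`; this mirrors the abstract's claim that the gain comes for free at inference time.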

Keywords