IEEE Open Journal of Signal Processing (Jan 2024)

MMSFormer: Multimodal Transformer for Material and Semantic Segmentation

  • Md Kaykobad Reza,
  • Ashley Prater-Bennette,
  • M. Salman Asif

DOI: https://doi.org/10.1109/OJSP.2024.3389812
Journal volume & issue: Vol. 5, pp. 599–610

Abstract


Leveraging information across diverse modalities is known to enhance performance on multimodal segmentation tasks. However, effectively fusing information from different modalities remains challenging due to the unique characteristics of each modality. In this paper, we propose a novel fusion strategy that can effectively fuse information from different modality combinations. We also propose a new model named Multi-Modal Segmentation TransFormer (MMSFormer) that incorporates the proposed fusion strategy to perform multimodal material and semantic segmentation tasks. MMSFormer outperforms current state-of-the-art models on three different datasets. Starting from a single input modality, performance improves progressively as additional modalities are incorporated, showcasing the effectiveness of the fusion block in combining useful information from diverse input modalities. Ablation studies show that the different modules in the fusion block are crucial for overall model performance. They also highlight the capacity of different input modalities to improve the identification of different types of materials.
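
The abstract describes a fusion block that combines features extracted from an arbitrary set of input modalities. The sketch below is an illustrative, simplified fusion block in PyTorch, not the authors' released implementation: the class name, layer choices (1x1 projection of concatenated features, parallel depth-wise convolutions, squeeze-and-excitation style channel attention), and all hyperparameters are assumptions made for demonstration only.

    # Illustrative sketch (not the authors' code): a generic multimodal fusion
    # block that concatenates per-modality features, projects them with a 1x1
    # convolution, refines them with parallel multi-scale convolutions, and
    # reweights channels with squeeze-and-excitation style attention.
    import torch
    import torch.nn as nn

    class FusionBlock(nn.Module):
        def __init__(self, channels: int, num_modalities: int, reduction: int = 4):
            super().__init__()
            # Linear (1x1 conv) projection of the concatenated modality features.
            self.project = nn.Conv2d(num_modalities * channels, channels, kernel_size=1)
            # Parallel depth-wise convolutions capture multi-scale spatial context.
            self.convs = nn.ModuleList([
                nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2, groups=channels)
                for k in (3, 5, 7)
            ])
            # Channel attention to emphasize informative channels of the fused map.
            self.attn = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, feats):
            # feats: list of (B, C, H, W) feature maps, one per input modality.
            x = self.project(torch.cat(feats, dim=1))
            x = x + sum(conv(x) for conv in self.convs)  # residual multi-scale mixing
            return x * self.attn(x)                      # channel reweighting

    # Usage example: fuse RGB plus two auxiliary modalities at one encoder stage.
    if __name__ == "__main__":
        block = FusionBlock(channels=64, num_modalities=3)
        feats = [torch.randn(2, 64, 32, 32) for _ in range(3)]
        print(block(feats).shape)  # torch.Size([2, 64, 32, 32])

Because the block accepts a list of feature maps, the same module can be reused as modalities are added or removed, which mirrors the progressive improvement from one to multiple modalities reported in the abstract.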

Keywords