Entropy (Jan 2024)

Lightweight Cross-Modal Information Mutual Reinforcement Network for RGB-T Salient Object Detection

  • Chengtao Lv,
  • Bin Wan,
  • Xiaofei Zhou,
  • Yaoqi Sun,
  • Jiyong Zhang,
  • Chenggang Yan

DOI
https://doi.org/10.3390/e26020130
Journal volume & issue
Vol. 26, no. 2
p. 130

Abstract


RGB-T salient object detection (SOD) has made significant progress in recent years. However, most existing works are based on heavy models, which are not applicable to mobile devices. Additionally, there is still room for improvement in the design of cross-modal feature fusion and cross-level feature fusion. To address these issues, we propose a lightweight cross-modal information mutual reinforcement network for RGB-T SOD. Our network consists of a lightweight encoder, the cross-modal information mutual reinforcement (CMIMR) module, and the semantic-information-guided fusion (SIGF) module. To reduce the computational cost and the number of parameters, we employ lightweight modules in both the encoder and the decoder. Furthermore, to fuse the complementary information between the two modal features, we design the CMIMR module to enhance them. This module effectively refines the two modal features by absorbing previous-level semantic information and inter-modal complementary information. In addition, to fuse cross-level features and detect multiscale salient objects, we design the SIGF module, which effectively suppresses noisy background information in low-level features and extracts multiscale information. We conduct extensive experiments on three RGB-T datasets, and our method achieves competitive performance compared with 15 state-of-the-art methods.
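The paper defines the CMIMR module precisely in its method section; as a rough, generic illustration of the underlying idea of cross-modal mutual reinforcement (each modality's feature map re-weighted by an attention mask derived from the other modality), one might sketch it in NumPy as follows. The function name, the sigmoid-gated residual form, and the tensor shapes are assumptions for illustration, not the authors' actual module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mutual_reinforcement(f_rgb, f_t):
    """Illustrative cross-modal mutual reinforcement (assumed form):
    each modality's feature map is enhanced by a residual term gated
    by an attention mask computed from the other modality."""
    att_from_t = sigmoid(f_t)      # attention mask derived from thermal features
    att_from_rgb = sigmoid(f_rgb)  # attention mask derived from RGB features
    f_rgb_enh = f_rgb + f_rgb * att_from_t    # RGB reinforced by thermal cues
    f_t_enh = f_t + f_t * att_from_rgb        # thermal reinforced by RGB cues
    return f_rgb_enh, f_t_enh

# Toy single-channel 4x4 feature maps standing in for encoder outputs.
rgb_feat = np.random.randn(1, 4, 4)
thermal_feat = np.random.randn(1, 4, 4)
rgb_enh, t_enh = mutual_reinforcement(rgb_feat, thermal_feat)
print(rgb_enh.shape, t_enh.shape)  # shapes are preserved: (1, 4, 4) (1, 4, 4)
```

In a real network the attention masks would come from learned convolutions rather than a bare sigmoid of the raw features; the sketch only shows the mutual, bidirectional gating pattern that the module's name suggests.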

Keywords