Systems (Jan 2025)

Multimodal Recommendation System Based on Cross Self-Attention Fusion

  • Peishan Li,
  • Weixiao Zhan,
  • Lutao Gao,
  • Shuran Wang,
  • Linnan Yang

DOI: https://doi.org/10.3390/systems13010057
Journal volume & issue: Vol. 13, no. 1, p. 57

Abstract

Recent advances in graph neural networks (GNNs) have enhanced the ability of multimodal recommendation systems to process complex user–item interactions. However, current approaches face two key limitations: they rely on static similarity metrics when constructing product relationship graphs, and they struggle to fuse information effectively across modalities. We propose MR-CSAF, a novel multimodal recommendation algorithm based on cross-self-attention fusion. Building on FREEDOM, our approach introduces an adaptive modality selector that dynamically weights each modality's contribution to product similarity, enabling more accurate product relationship graphs and optimized modality representations. We employ a cross-self-attention mechanism to facilitate both inter- and intra-modal information transfer, while using graph convolution to incorporate the updated features into item and product modal representations. Experimental results on three public datasets demonstrate that MR-CSAF outperforms eight baseline methods, validating its effectiveness for personalized recommendation in complex multimodal environments.
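To make the architecture described in the abstract more concrete, the following is a minimal PyTorch sketch of the two ideas it names: intra-modal self-attention combined with inter-modal cross-attention, and an adaptive modality selector whose weights feed a k-NN item relationship graph. All module names, dimensions, and wiring here are illustrative assumptions, not the authors' MR-CSAF implementation.

```python
# Illustrative sketch only: dimensions, layer choices, and the k-NN graph
# construction are assumptions, not the released MR-CSAF code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossSelfAttentionFusion(nn.Module):
    """Fuses visual and textual item features via intra-modal self-attention
    and inter-modal cross-attention, then weights each modality adaptively."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.self_attn_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn_t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn_t = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Adaptive modality selector: per-item score for each modality.
        self.selector = nn.Linear(2 * dim, 2)

    def forward(self, vis: torch.Tensor, txt: torch.Tensor):
        # vis, txt: (num_items, dim) modality features; items form one "sequence".
        v, t = vis.unsqueeze(0), txt.unsqueeze(0)        # (1, N, dim)
        v_sa, _ = self.self_attn_v(v, v, v)              # intra-modal (visual)
        t_sa, _ = self.self_attn_t(t, t, t)              # intra-modal (textual)
        v_ca, _ = self.cross_attn_v(v_sa, t_sa, t_sa)    # visual attends to text
        t_ca, _ = self.cross_attn_t(t_sa, v_sa, v_sa)    # text attends to visual
        v_out = (v_sa + v_ca).squeeze(0)
        t_out = (t_sa + t_ca).squeeze(0)

        # Per-item modality weights, usable when building the item-item graph.
        w = F.softmax(self.selector(torch.cat([v_out, t_out], dim=-1)), dim=-1)
        fused = w[:, :1] * v_out + w[:, 1:] * t_out
        return fused, w


def knn_item_graph(feat: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Builds a k-NN item relationship graph from fused features (cosine sim)."""
    sim = F.normalize(feat, dim=-1) @ F.normalize(feat, dim=-1).T
    topk = sim.topk(k + 1, dim=-1).indices[:, 1:]        # drop self-match
    return torch.zeros_like(sim).scatter_(1, topk, 1.0)


if __name__ == "__main__":
    vis = torch.randn(100, 64)   # e.g., image-encoder embeddings per item
    txt = torch.randn(100, 64)   # e.g., text-encoder embeddings per item
    fused, weights = CrossSelfAttentionFusion()(vis, txt)
    adj = knn_item_graph(fused, k=10)
    print(fused.shape, weights.shape, adj.sum(dim=1)[:5])
```

In this reading, the selector's softmax weights play the role of the dynamic (rather than static) modality weighting the abstract describes, and the resulting adjacency matrix would feed the graph-convolution stage; the actual propagation and FREEDOM-based training objective are omitted.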

Keywords