Complex & Intelligent Systems (Nov 2024)
Relieving popularity bias in recommendation via debiasing representation enhancement
Abstract
The interaction data used for training recommender systems often exhibit a long-tail distribution. Such a highly imbalanced data distribution results in an unfair learning process across items. Contrastive learning alleviates this issue through data augmentation. However, it does not account for the significant disparity in popularity between items and may even introduce false negatives during augmentation, misleading the prediction of user preferences. To address this issue, we combine contrastive learning with a weighted model for negative validation. By penalizing identified false negatives during training, we limit the harm they can cause to the learning process. Meanwhile, to tackle the scarcity of supervision signals for unpopular items, we design Popularity Associated Modeling to mine correlations among items. We then guide unpopular items to learn hidden features favored by specific users from their associated popular items, which provides effective supplementary information for their representation modeling. Extensive experiments on three real-world datasets demonstrate that our proposed model not only outperforms state-of-the-art baselines in recommendation performance, with Recall@20 improvements of 4.2%, 2.4% and 3.6% across the datasets, but also shows significant effectiveness in relieving popularity bias.
Keywords