Jisuanji kexue yu tansuo (Jan 2024)

Integrating Behavioral Dependencies into Multi-task Learning for Personalized Recommendations

  • GU Junhua, LI Ningning, WANG Xinxin, ZHANG Suqi

DOI
https://doi.org/10.3778/j.issn.1673-9418.2208098
Journal volume & issue
Vol. 18, no. 1
pp. 231 – 243

Abstract


The introduction of multiple types of behavioral data alleviates the data-sparsity and cold-start problems of collaborative filtering, and multi-behavior recommendation is therefore widely studied and applied. Despite substantial progress, current research still has two shortcomings: it fails to comprehensively capture the complex dependencies between behaviors, and it ignores the relevance of behavior features to users and items, which biases the recommendation results. As a consequence, the learned feature vectors cannot accurately represent users' interest preferences. To address these problems, a personalized recommendation model that integrates behavioral dependencies into multi-task learning (BDMR) is proposed; the complex dependencies between behaviors are divided into feature relevance and temporal relevance. Firstly, a personalized behavior vector is set for each user, and multiple interaction graphs are processed with graph neural networks that combine user, item, and behavior features to aggregate higher-order neighborhood information; an attention mechanism is then applied to learn the feature relevance among behaviors. Secondly, the interaction sequence composed of behavior features and item features is fed into a long short-term memory network to capture the temporal relevance among behaviors. Finally, the personalized behavior vectors are integrated into a multi-task learning framework to obtain more accurate user, behavior, and item features. To verify the performance of the model, experiments are conducted on three real-world datasets. On the Yelp dataset, HR and NDCG improve by 1.5% and 2.9%, respectively, over the best baseline; on the ML20M dataset, by 2.0% and 0.5%; and on the Tmall dataset, by 25.6% and 30.2%.
Experimental results show that the proposed model outperforms the baselines.
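The two kinds of behavioral dependency described above can be illustrated with a minimal sketch. This is not the paper's implementation: the GNN-derived behavior embeddings are replaced with random vectors, the LSTM with a plain recurrent update, and all function names (`fuse_feature_relevance`, `temporal_pass`) and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Behavior-specific user embeddings, one per behavior type (view, cart,
# purchase). In BDMR these would be aggregated by graph neural networks
# over per-behavior interaction graphs; here they are random stand-ins.
behavior_embs = {b: rng.normal(size=DIM) for b in ["view", "cart", "purchase"]}

def fuse_feature_relevance(behavior_embs, query):
    """Attention-weighted fusion of behavior embeddings (feature relevance)."""
    names = list(behavior_embs)
    E = np.stack([behavior_embs[b] for b in names])  # (B, DIM)
    w = softmax(E @ query)                           # relevance to the user query
    return w @ E, dict(zip(names, w))

def temporal_pass(seq, Wh, Wx):
    """Minimal recurrent pass over an interaction sequence (temporal
    relevance); a simplified stand-in for the paper's LSTM."""
    h = np.zeros(Wh.shape[0])
    for x in seq:
        h = np.tanh(Wh @ h + Wx @ x)
    return h

user_query = rng.normal(size=DIM)
fused, weights = fuse_feature_relevance(behavior_embs, user_query)

# Each sequence step concatenates a behavior feature with an item feature,
# mirroring the "interaction sequence" the abstract describes.
seq = [np.concatenate([behavior_embs["view"], rng.normal(size=DIM)])
       for _ in range(4)]
Wh = rng.normal(size=(DIM, DIM)) * 0.1
Wx = rng.normal(size=(DIM, 2 * DIM)) * 0.1
h_t = temporal_pass(seq, Wh, Wx)

# A final preference score could combine both signals against an item
# embedding; the actual model learns this combination end to end.
item = rng.normal(size=DIM)
score = fused @ item + h_t @ item
```

The attention weights make the feature-relevance fusion interpretable: each behavior's contribution to the fused user representation can be read off directly, which is the property the abstract attributes to personalized behavior vectors.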

Keywords