IEEE Access (Jan 2024)

RLISR: A Deep Reinforcement Learning Based Interactive Service Recommendation Model

  • Mingwei Zhang,
  • Yingjie Qu,
  • Yage Li,
  • Xingyu Wen,
  • Yi Zhou

DOI
https://doi.org/10.1109/ACCESS.2024.3420395
Journal volume & issue
Vol. 12
pp. 90204–90217

Abstract

An increasing number of services are being offered online, which makes it difficult to select appropriate services during mashup development. Many service recommendation studies have achieved remarkable results in alleviating this service selection challenge. However, they are limited to suggesting services for a single round or only the next round, and they ignore the interactive nature of real-world service recommendation scenarios. As a result, existing methods cannot capture developers' shifting requirements or achieve long-term optimal recommendation performance over the whole recommendation process. In this paper, we propose a deep reinforcement learning based interactive service recommendation model (RLISR) to tackle this problem. Specifically, we formulate interactive service recommendation as a multi-round decision-making process and design a reinforcement learning framework to enable interactions between mashup developers and service recommender systems. First, we propose a knowledge-graph-based state representation modeling method that considers both the positive and negative feedback of developers. Then, we design an informative reward function aimed at boosting recommendation accuracy while reducing the number of recommendation rounds. Finally, we adopt a cascading Q-networks model to cope with the enormous combinatorial candidate space and learn an optimal recommendation policy. Extensive experiments conducted on a real-world dataset validate the effectiveness of the proposed approach compared to state-of-the-art service recommendation approaches.
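To make the interaction loop described in the abstract concrete, the following is a minimal toy sketch (not the paper's implementation): the state is the developer's accumulated positive/negative feedback, a round recommends K services by greedy cascading selection where each pick conditions on the picks already made, and the reward combines an accuracy bonus with a per-round cost. The candidate pool size, the hand-written scoring function, and the reward weights are all illustrative assumptions; the paper instead learns cascading Q-networks over knowledge-graph-based state embeddings.

```python
import random

random.seed(0)

N_SERVICES = 20   # size of the candidate service pool (toy assumption)
K = 3             # services recommended per round (toy assumption)
MAX_ROUNDS = 5    # interaction budget

# Toy "ground truth": the services the developer actually needs.
target = set(random.sample(range(N_SERVICES), K))

def q_value(state, chosen, candidate):
    """Toy stand-in for a Q-network: score one candidate given the state
    (positive/negative feedback sets) and the services already chosen in
    this round. The real model learns this from knowledge-graph embeddings."""
    pos, neg = state
    score = random.random() * 0.1   # small exploration noise
    if candidate in pos:
        score += 1.0                # previously accepted services score high
    if candidate in neg or candidate in chosen:
        score -= 10.0               # avoid rejected or duplicate services
    return score

def recommend(state):
    """Cascading selection: pick K services one at a time, each pick
    conditioned on the picks already made (greedy over the toy Q)."""
    chosen = []
    for _ in range(K):
        best = max(range(N_SERVICES), key=lambda s: q_value(state, chosen, s))
        chosen.append(best)
    return chosen

pos_feedback, neg_feedback = set(), set()
for round_no in range(1, MAX_ROUNDS + 1):
    recs = recommend((pos_feedback, neg_feedback))
    hits = [s for s in recs if s in target]
    # Reward: accuracy bonus minus a per-round cost, mirroring the abstract's
    # "boost accuracy, reduce rounds" objective (weights are assumptions).
    reward = len(hits) - 0.2
    pos_feedback.update(hits)
    neg_feedback.update(s for s in recs if s not in target)
    if pos_feedback >= target:
        break   # all required services found; stop interacting
```

In the full model, `q_value` is replaced by learned cascading Q-networks, which keeps the per-round action space linear in the number of candidates instead of enumerating all size-K combinations.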

Keywords