IEEE Access (Jan 2024)
Knowledge Graph-Enhanced Hierarchical Reinforcement Learning for Interactive and Explainable Recommendation
Abstract
Recent advances in interactive recommender systems (IRSs) have sparked great interest in exploiting deep reinforcement learning (DRL) techniques to account for long-term user experience. However, DRL-based IRSs tend to suffer from two common issues. The first is low sample efficiency: the large action space, consisting of a large number of candidate items, requires a huge amount of interaction data to train an effective recommendation policy. The second is the lack of explainability of the learned recommendation policy. To this end, we propose Knowledge Graph-enhanced Hierarchical Reinforcement learning for interactive and explainable recommendation (KGHR), which harnesses the advantages of both DRL and knowledge graphs (KGs) to address these two issues of DRL-based IRSs. Specifically, the proposed model decomposes the interactive recommendation task into a two-level hierarchy of reinforcement learning tasks. The high-level agent learns to rank, step by step, the candidate items sampled by the low-level agent, with a KG incorporated to enrich the state representation and thereby improve sample efficiency. The low-level agent guides user-conditioned candidate item sampling, exploiting the same KG to couple candidate selection with explainability by providing actual reasoning paths. Comprehensive experiments conducted in a simulated online environment on two real-world datasets demonstrate the superiority of our approach over state-of-the-art methods in terms of accuracy and explainability.
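To make the two-level decomposition described above concrete, the following is a minimal sketch of a single interaction step, assuming a low-level agent that samples a user-conditioned candidate set and a high-level agent that ranks it. The class names (LowLevelAgent, HighLevelAgent), the candidate-set size k, and the placeholder scorers are illustrative assumptions, not the authors' implementation; the KG-guided path reasoning is replaced here by simple stand-ins.

```python
import random

class LowLevelAgent:
    """Samples a small, user-conditioned candidate set from the full item pool.

    In KGHR this step would follow reasoning paths over the knowledge graph;
    here it is stood in for by uniform sampling for brevity."""
    def __init__(self, item_pool, k=10):
        self.item_pool = list(item_pool)
        self.k = k

    def sample_candidates(self, user_state):
        # A KG-guided policy would score paths user -> relation -> ... -> item;
        # this placeholder just draws k items uniformly at random.
        return random.sample(self.item_pool, min(self.k, len(self.item_pool)))

class HighLevelAgent:
    """Ranks the sampled candidates and selects the item to recommend."""
    def __init__(self, scorer):
        self.scorer = scorer  # maps (user_state, item) -> relevance score

    def recommend(self, user_state, candidates):
        return max(candidates, key=lambda item: self.scorer(user_state, item))

# One interaction step of the hierarchy (hypothetical usage).
items = range(1000)
low = LowLevelAgent(items, k=20)
high = HighLevelAgent(scorer=lambda state, item: random.random())  # placeholder scorer
user_state = {"history": [3, 17, 256]}
candidates = low.sample_candidates(user_state)
recommended = high.recommend(user_state, candidates)
print("recommended item:", recommended)
```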
Keywords