IEEE Access (Jan 2021)

Conservative Contextual Combinatorial Cascading Bandit

  • Kun Wang

DOI
https://doi.org/10.1109/ACCESS.2021.3124416
Journal volume & issue
Vol. 9
pp. 151434 – 151443

Abstract

Contextual combinatorial cascading bandit ( $C^{3}$ -bandit) is a powerful multi-armed bandit framework that balances the tradeoff between exploration and exploitation in the learning process. It captures users' click behavior well and has been applied in a broad spectrum of real-world applications such as recommender systems and search engines. However, the framework provides no performance guarantee during the initial exploration phase. To that end, we propose the conservative contextual combinatorial cascading bandit ( $C^{4}$ -bandit) model, which addresses this modeling issue. In this problem, the learning agent is given contexts, recommends a list of items that must perform no worse than a baseline strategy, and then observes the reward according to some stopping rule. The objective is to maximize the cumulative reward while simultaneously satisfying the safety constraint, i.e., guaranteeing that the algorithm performs at least as well as the baseline strategy. To tackle this new problem, we extend an online learning algorithm, the Upper Confidence Bound (UCB), to handle the critical tradeoff between exploitation and exploration, and employ a conservative mechanism to properly enforce the safety constraint. By carefully integrating these two techniques, we develop a new algorithm for this problem, called $C^{4}$ -UCB. Further, we rigorously prove $n$-step regret upper bounds in two settings: known baseline reward and unknown baseline reward. In both settings, the regret is enlarged by only an additive constant term compared to the results for the $C^{3}$ -bandit. Finally, experiments on synthetic and real-world datasets demonstrate the algorithm's advantages.
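The conservative mechanism described in the abstract can be illustrated with a minimal sketch. The code below is a simplified, non-contextual, non-cascading variant (a plain Bernoulli multi-armed bandit), not the paper's $C^{4}$ -UCB: the function name `conservative_ucb`, the arm means, the confidence radius $\sqrt{2\ln t / n_i}$, and the $(1-\alpha)$ safety threshold are all illustrative assumptions. The key idea it demonstrates is the safety check: the exploratory UCB action is played only when a pessimistic (lower-confidence-bound) estimate of cumulative reward stays above a $(1-\alpha)$ fraction of what the baseline alone would have earned; otherwise the agent falls back to the baseline.

```python
import math
import random


def conservative_ucb(arm_means, baseline_mean, alpha=0.1, horizon=2000, seed=0):
    """Illustrative conservative UCB on a Bernoulli bandit (NOT the paper's
    C^4-UCB: no contexts, no cascading click feedback, known baseline mean)."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k          # pulls per arm
    sums = [0.0] * k          # cumulative reward per arm
    baseline_plays = 0        # rounds where the safe baseline was chosen
    total_reward = 0.0

    for t in range(1, horizon + 1):
        logt = math.log(max(t, 2))

        def ucb(i):
            # Optimistic index; unpulled arms are infinitely attractive.
            if counts[i] == 0:
                return float("inf")
            mean = sums[i] / counts[i]
            return mean + math.sqrt(2 * logt / counts[i])

        def lcb(i):
            # Pessimistic index used only for the safety check.
            if counts[i] == 0:
                return 0.0
            mean = sums[i] / counts[i]
            return max(0.0, mean - math.sqrt(2 * logt / counts[i]))

        best = max(range(k), key=ucb)

        # Safety check: pessimistic cumulative reward if we explore this
        # round must remain >= (1 - alpha) * (baseline's cumulative reward).
        pessimistic = (
            sum(counts[i] * lcb(i) for i in range(k))
            + lcb(best)
            + baseline_plays * baseline_mean
        )
        if pessimistic >= (1 - alpha) * t * baseline_mean:
            # Constraint holds even in the worst case: explore with UCB.
            r = 1.0 if rng.random() < arm_means[best] else 0.0
            counts[best] += 1
            sums[best] += r
        else:
            # Constraint might be violated: play the safe baseline instead.
            baseline_plays += 1
            r = 1.0 if rng.random() < baseline_mean else 0.0
        total_reward += r

    return total_reward, baseline_plays
```

Early on, all lower confidence bounds are zero, so the agent is forced onto the baseline for the first several rounds; exploratory pulls are then interleaved only as fast as the accumulated confidence budget allows, which mirrors the "performance at least as good as the baseline" guarantee described above.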

Keywords