Journal of Control Science and Engineering (Jan 2015)

Combining Multiple Strategies for Multiarmed Bandit Problems and Asymptotic Optimality

  • Hyeong Soo Chang,
  • Sanghee Choe

DOI
https://doi.org/10.1155/2015/264953
Journal volume & issue
Vol. 2015

Abstract


This brief paper presents a simple algorithm that, at each time step, selects a strategy from a given set of strategies for stochastic multiarmed bandit problems and plays the arm chosen by that strategy. The algorithm follows the idea of the probabilistic ϵt-switching used in the ϵt-greedy strategy and is asymptotically optimal in the sense that the strategy it selects converges to the best strategy in the set, under some conditions on the strategies in the set and on the sequence {ϵt}.
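
The abstract describes ϵt-switching at the strategy level rather than the arm level. As a minimal sketch of that idea (not the paper's exact algorithm): with probability ϵt a strategy is picked uniformly at random, and otherwise the strategy with the best empirical average reward so far is picked; the chosen strategy then selects the arm to play. The interface names select_arm, update, and bandit_pull, as well as the particular ϵt schedule, are illustrative assumptions.

```python
import random

def epsilon_t_strategy_switching(strategies, bandit_pull, horizon,
                                 epsilon_t=lambda t: min(1.0, 10.0 / (t + 1))):
    """Sketch of epsilon_t-switching over a set of bandit strategies.

    `strategies` is a list of objects exposing `select_arm()` and
    `update(arm, reward)` (a hypothetical interface); `bandit_pull(arm)`
    returns a stochastic reward. The schedule `epsilon_t` is assumed to
    decay to zero, in line with the conditions mentioned in the abstract.
    """
    counts = [0] * len(strategies)        # times each strategy has been chosen
    avg_reward = [0.0] * len(strategies)  # empirical mean reward per strategy

    for t in range(horizon):
        if random.random() < epsilon_t(t):
            # Explore: pick a strategy uniformly at random.
            i = random.randrange(len(strategies))
        else:
            # Exploit: pick the strategy with the best empirical average so far.
            i = max(range(len(strategies)), key=lambda j: avg_reward[j])

        arm = strategies[i].select_arm()   # the chosen strategy picks the arm
        reward = bandit_pull(arm)

        # Every base strategy observes the played arm and its reward.
        for s in strategies:
            s.update(arm, reward)

        counts[i] += 1
        avg_reward[i] += (reward - avg_reward[i]) / counts[i]

    return avg_reward
```

With a decaying ϵt, the random switching vanishes over time, so the meta-algorithm increasingly defers to whichever base strategy has performed best, which is the sense of convergence to the best strategy stated in the abstract.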