The Restless Hidden Markov Bandit with Linear Rewards and Side Information

22 Oct 2019 · Michal Yemini, Amir Leshem, Anelia Somekh-Baruch

In this paper we present a model for the hidden Markovian bandit problem with linear rewards. In contrast to prior work on Markovian bandits, we do not assume that the state is known to the decision maker before the decision is made. Furthermore, we assume structural side information: the decision maker knows in advance that there are two types of hidden states, one common to all arms that evolves according to a Markovian distribution, and one unique to each arm that follows an arm-specific i.i.d. process. We present an algorithm and a regret analysis for this problem. Surprisingly, the hidden states can be recovered and logarithmic regret maintained even when the action set is a convex polytope. Furthermore, we show that the structural side information yields expected regret that does not depend on the number of extreme points of the action set. We therefore obtain practical solutions even in high-dimensional problems.
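Since the abstract does not spell out the authors' algorithm, the following is only a minimal Python sketch of the *environment* as described: a hidden state common to all arms that evolves as a Markov chain, an i.i.d. private hidden state per arm, and an observed reward that is linear in the chosen action, which lies in a convex polytope (here, the probability simplex over arms). All concrete values (number of arms `K`, number of states `S`, the transition matrix `P`, the means `mu`, and the noise level) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of the model in the abstract: a common hidden state
# shared by all arms evolves as a Markov chain, each arm also draws a fresh
# i.i.d. private hidden state every round, and the observed reward is linear
# in the chosen action. All parameter values below are assumptions.

rng = np.random.default_rng(0)

K = 3  # number of arms (illustrative)
S = 2  # number of common hidden states (illustrative)

# Transition matrix of the common Markovian hidden state (assumed).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Arm-specific i.i.d. private-state distributions (assumed: 2 private states).
arm_state_probs = rng.dirichlet(np.ones(2), size=K)

# Mean reward of each arm given (common state, private state) (assumed).
mu = rng.uniform(0.0, 1.0, size=(K, S, 2))

def step(common_state, action):
    """Advance the common Markov state, draw per-arm private states,
    and return the new state plus the linear reward <action, r>."""
    common_state = rng.choice(S, p=P[common_state])
    private = np.array([rng.choice(2, p=arm_state_probs[k]) for k in range(K)])
    r = np.array([mu[k, common_state, private[k]] for k in range(K)])
    r += 0.05 * rng.standard_normal(K)  # observation noise (assumed)
    return common_state, float(action @ r)

# Actions live in a convex polytope; here the simplex over the K arms.
action = np.ones(K) / K
s = 0
for t in range(5):
    s, reward = step(s, action)
    print(f"t={t}  reward={reward:.3f}")
```

Note the key difficulty the paper addresses: the decision maker observes only the scalar reward, never `common_state` or `private`, so the states must be recovered from the reward sequence itself.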
