Learning Orthogonal Projections in Linear Bandits

26 Jun 2019 · Qiyu Kang, Wee Peng Tay

In a linear stochastic bandit model, each arm is a vector in a Euclidean space, and the observed return at each time step is an unknown linear function of the chosen arm. In this paper, we investigate the problem of learning the best arm in a linear stochastic bandit model where each arm's expected reward is an unknown linear function of the projection of the arm onto a subspace; we call this the projection reward. Unlike the classical linear bandit problem, in which the observed return corresponds to the reward, the projection reward at each time step is unobservable. Such a model is useful in recommendation applications where the observed return is corrupted by each individual's biases, which we wish to exclude from the learned model. When there are finitely many arms, we develop a strategy that achieves $O(|\mathbb{D}|\log n)$ regret, where $n$ is the number of time steps and $|\mathbb{D}|$ is the number of arms. When each arm is chosen from an infinite compact set, our strategy achieves $O(n^{2/3}(\log{n})^{1/2})$ regret. Experiments verify the efficiency of our strategy.
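The abstract describes the model only at a high level, so the sketch below is a minimal NumPy illustration of the projection-reward setup, not the paper's learning strategy. The dimensions `d` and `k`, the subspace basis `U`, the arm set, and the noise level are hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 5, 2                        # ambient dimension and subspace dimension (assumed values)
theta = rng.normal(size=d)         # unknown parameter vector
U = np.linalg.qr(rng.normal(size=(d, k)))[0]  # orthonormal basis of the subspace (assumed)
P = U @ U.T                        # orthogonal projection onto the subspace

arms = rng.normal(size=(20, d))    # a finite set of candidate arms

def observed_return(x, noise_std=0.1):
    """Observed return: linear in the arm itself, plus noise (includes the bias component)."""
    return theta @ x + noise_std * rng.normal()

def projection_reward(x):
    """Unobservable reward: linear in the projection of the arm onto the subspace."""
    return theta @ (P @ x)

# The learner only ever sees observed_return; the goal is to find the arm
# maximizing the (hidden) projection_reward.
best_arm = max(arms, key=projection_reward)
print("best projection reward:", projection_reward(best_arm))
print("one noisy observed return for that arm:", observed_return(best_arm))
```

The sketch only shows why the two quantities differ: the observed return depends on the full arm vector (including components outside the subspace, i.e., the individual biases), while the reward of interest depends only on the projected component.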
