Influence Diagram Bandits

We propose a novel framework for structured bandits, which we call the influence diagram bandit. Our framework captures complicated statistical dependencies between actions, latent variables, and observations, and it unifies and extends many existing models, such as combinatorial semi-bandits, cascading bandits, and low-rank bandits. We develop novel online learning algorithms that act efficiently in our models. The key idea is to track a structured posterior distribution over model parameters, either exactly or approximately. To act, we sample model parameters from the posterior and then use the structure of the influence diagram to find the most optimistic actions under the sampled parameters. We experiment with three structured bandit problems: cascading bandits, online learning to rank in the position-based model, and rank-1 bandits. Our algorithms achieve up to roughly three times higher cumulative reward than the baselines.

ICML 2020
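
The paper's algorithms are not reproduced on this page, but the recipe in the abstract (sample model parameters from a structured posterior, then act optimally under the sample) is a form of Thompson sampling. As a rough illustration only, the sketch below applies that recipe to one of the named models, the cascading bandit, assuming independent Beta-Bernoulli posteriors over item attraction probabilities; the simulator, variable names, and parameter values are all hypothetical, not taken from the paper.

```python
import numpy as np

# Illustrative Thompson sampling for a cascading bandit, one of the
# structured models named in the abstract. This is NOT the paper's
# algorithm; it is a minimal Beta-Bernoulli sketch of the same idea.

rng = np.random.default_rng(0)

L, K, T = 20, 4, 5000                        # items, list length, horizon
attract = rng.uniform(0.05, 0.35, size=L)    # hidden attraction probabilities

# Independent Beta(alpha, beta) posterior per item.
alpha = np.ones(L)
beta = np.ones(L)

total_reward = 0.0
for t in range(T):
    theta = rng.beta(alpha, beta)             # sample parameters from posterior
    ranked = np.argsort(-theta)[:K]           # best list under the sample
    clicks = rng.random(K) < attract[ranked]  # simulate the cascade of clicks
    click_pos = int(np.argmax(clicks)) if clicks.any() else K
    total_reward += float(clicks.any())
    # Cascade feedback: items above the click were examined and skipped,
    # the clicked item was examined and clicked, the rest are unobserved.
    for pos in range(min(click_pos + 1, K)):
        item = ranked[pos]
        if pos == click_pos:
            alpha[item] += 1.0
        else:
            beta[item] += 1.0

print(f"average reward over {T} rounds: {total_reward / T:.3f}")
```

The cascade structure determines which feedback is informative: items ranked below the first click are never examined, so their posteriors are left untouched. This mirrors the abstract's point that the structure of the model dictates both how the posterior is tracked and how actions are chosen.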