Best Arm Identification in Linked Bandits

19 Nov 2018 · Anant Gupta

We consider the problem of best arm identification in a variant of multi-armed bandits called linked bandits. In a single interaction with a linked bandit, multiple arms are played sequentially until one of them receives a positive reward. Since each interaction provides feedback about more than one arm, the sample complexity can be much lower than in the regular bandit setting. We propose an algorithm for linked bandits that combines a novel subroutine for uniform sampling with a known optimal algorithm for regular bandits. We prove almost matching upper and lower bounds on the sample complexity of best arm identification in linked bandits. These bounds have an interesting structure, with an explicit dependence on the mean rewards of the arms, not just on the gaps. We also corroborate our theoretical results with experiments.
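To make the interaction model concrete, below is a minimal sketch of a single interaction with a linked bandit as described in the abstract, assuming Bernoulli arms. The class and method names (`LinkedBandit`, `interact`) and the fixed play order are illustrative assumptions, not the paper's notation or algorithm.

```python
import random

class LinkedBandit:
    """Sketch of the linked-bandit interaction model (assumed Bernoulli arms)."""

    def __init__(self, means):
        self.means = means  # mean reward (success probability) of each arm

    def interact(self, order):
        """Play arms in the given order until one yields a positive reward.

        Returns the observed (arm, reward) pairs for every arm played, so a
        single interaction can provide feedback about several arms at once.
        """
        feedback = []
        for arm in order:
            reward = 1 if random.random() < self.means[arm] else 0
            feedback.append((arm, reward))
            if reward > 0:  # the interaction stops at the first positive reward
                break
        return feedback

# Example: one interaction yields samples for every arm up to and including
# the first arm that paid off.
bandit = LinkedBandit(means=[0.1, 0.3, 0.6])
print(bandit.interact(order=[0, 1, 2]))
```

This illustrates why the sample complexity can be lower than in the regular bandit setting: arms earlier in the sequence are observed in every interaction that reaches them, so one pull of the chain may update estimates for several arms.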
