A Simple and Optimal Policy Design with Safety against Heavy-tailed Risk for Stochastic Bandits

7 Jun 2022 · David Simchi-Levi, Zeyu Zheng, Feng Zhu

We study the stochastic multi-armed bandit problem and design new policies that enjoy both worst-case optimality for expected regret and light-tailed risk for the regret distribution. Starting from the two-armed bandit setting with time horizon $T$, we propose a simple policy and prove that it (i) is worst-case optimal for expected regret, at order $O(\sqrt{T\ln T})$, and (ii) has a worst-case probability of incurring linear regret that decays at the exponential rate $\exp(-\Omega(\sqrt{T}))$, which we prove to be the best rate achievable by any worst-case optimal policy. In brief, compared to the standard Successive Elimination and Upper Confidence Bound policies, our proposed policy does more exploration at the beginning of the time horizon and more exploitation toward the end. We then extend the policy design and analysis to the general $K$-armed bandit setting: the worst-case probability of incurring a regret larger than any $x>0$ is upper bounded by $\exp(-\Omega(x/\sqrt{KT}))$. We further enhance the policy design to accommodate the "any-time" setting where $T$ is not known a priori, and prove performance guarantees matching those of the "fixed-time" setting with known $T$. Numerical experiments are reported to illustrate the theoretical findings. We conclude by extending our policy design to the general stochastic linear bandit setting and proving that it achieves both worst-case optimality for the expected regret order and light-tailed risk for the regret distribution.
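For context, below is a minimal sketch of the standard Successive Elimination baseline that the abstract compares against (not the paper's proposed policy). The reward oracle `pull_arm`, the choice of `delta`, and the confidence-radius constant are illustrative assumptions; the paper's policy differs precisely in how it rebalances exploration early in the horizon against exploitation near the end.

```python
import numpy as np

def successive_elimination(pull_arm, K, T, delta=None):
    """Standard Successive Elimination baseline (illustrative sketch only).

    pull_arm(i) -> stochastic reward in [0, 1] for arm i (assumed oracle).
    Plays surviving arms round-robin and eliminates any arm whose upper
    confidence bound falls below the best arm's lower confidence bound.
    """
    if delta is None:
        delta = 1.0 / T  # common textbook choice, not from the paper
    active = list(range(K))
    counts = np.zeros(K)
    sums = np.zeros(K)
    t = 0
    means = np.zeros(K)
    while t < T:
        # Round-robin exploration over the surviving arms.
        for i in list(active):
            if t >= T:
                break
            sums[i] += pull_arm(i)
            counts[i] += 1
            t += 1
        means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
        # Hoeffding-style confidence radius; the constant is illustrative.
        radius = np.sqrt(np.log(2 * K * T / delta) / (2 * np.maximum(counts, 1)))
        best_lcb = max(means[i] - radius[i] for i in active)
        active = [i for i in active if means[i] + radius[i] >= best_lcb]
    return int(np.argmax(means))

# Example usage with two Bernoulli arms (means 0.5 and 0.6, assumed for illustration):
# rng = np.random.default_rng(0)
# best = successive_elimination(lambda i: rng.binomial(1, [0.5, 0.6][i]), K=2, T=10_000)
```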
