Regret Bounds and Reinforcement Learning Exploration of EXP-based Algorithms

20 Sep 2020 · Mengfan Xu, Diego Klabjan

EXP-based algorithms are often used for exploration in non-stochastic bandit problems, under the assumption that rewards are bounded. We propose a new algorithm, EXP4.P, obtained by modifying EXP4, and establish upper bounds on its regret in both the bounded and the unbounded sub-Gaussian contextual bandit settings. The unbounded-reward result also holds for a revised version of EXP3.P. Moreover, we provide a lower bound on regret showing that no sublinear regret is achievable over short time horizons. Unlike classical analyses, ours do not require bounded rewards. We also extend EXP4.P from contextual bandits to reinforcement learning to incentivize exploration by multiple agents given black-box rewards. The resulting algorithm has been tested on hard-to-explore games and shows improved exploration compared to the state-of-the-art.
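For context, the sketch below illustrates the classical EXP4 exponential-weights update (Auer et al.) on which the proposed EXP4.P builds: each expert recommends a distribution over arms, the learner mixes their advice with uniform exploration, and weights are updated via importance-weighted reward estimates. This is only a minimal illustration assuming K arms, N experts, and rewards in [0, 1]; the function name and toy reward are hypothetical, and it does not reproduce the paper's EXP4.P modifications or its unbounded sub-Gaussian analysis.

```python
import numpy as np

def exp4_round(weights, advice, gamma, rng):
    """One round of the classical EXP4 update (illustrative sketch only).

    weights : (N,) current expert weights
    advice  : (N, K) each expert's probability distribution over the K arms
    gamma   : exploration parameter in (0, 1]
    rng     : numpy random generator
    """
    N, K = advice.shape
    # Mix the experts' advice by their normalized weights, add uniform exploration.
    probs = (1 - gamma) * (weights @ advice) / weights.sum() + gamma / K
    arm = rng.choice(K, p=probs)

    # Observed reward for the chosen arm (toy stand-in; assumed bounded in [0, 1]).
    reward = rng.uniform()

    # Importance-weighted reward estimate: nonzero only for the pulled arm.
    x_hat = np.zeros(K)
    x_hat[arm] = reward / probs[arm]

    # Credit each expert with the estimated reward of its recommended distribution.
    y_hat = advice @ x_hat
    new_weights = weights * np.exp(gamma * y_hat / K)
    return new_weights, arm, reward

# Toy usage: 3 experts, 5 arms, uniform initial weights, random advice each round.
rng = np.random.default_rng(0)
w = np.ones(3)
for _ in range(10):
    adv = rng.dirichlet(np.ones(5), size=3)
    w, arm, r = exp4_round(w, adv, gamma=0.1, rng=rng)
```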
