An Approximate Dynamic Programming Approach to Adversarial Online Learning

16 Mar 2016 · Vijay Kamble, Patrick Loiseau, Jean Walrand

We describe an approximate dynamic programming (ADP) approach to compute approximations of the optimal strategies and of the minimal losses that can be guaranteed in discounted repeated games with vector-valued losses. Such games prominently arise in the analysis of regret in repeated decision-making in adversarial environments, also known as adversarial online learning. At the core of our approach is a characterization of the lower Pareto frontier of the set of expected losses that a player can guarantee in these games as the unique fixed point of a set-valued dynamic programming operator. When applied to the problem of regret minimization with discounted losses, our approach yields algorithms that achieve markedly improved performance bounds compared to off-the-shelf online learning algorithms like Hedge. These results thus suggest the significant potential of ADP-based approaches in adversarial online learning.
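The paper's ADP method iterates a set-valued dynamic programming operator, which is not reproduced here. As a point of reference for the comparison mentioned above, the following is a minimal sketch of the off-the-shelf Hedge (exponential-weights) baseline applied to discounted losses. The loss sequence, discount factor `beta`, and learning rate `eta` are hypothetical placeholders chosen only to make the example runnable; they are not taken from the paper.

```python
# Illustrative sketch (not the paper's algorithm): the Hedge baseline the
# abstract compares against, run on a stream of discounted expert losses.
import numpy as np


def discounted_hedge(loss_matrix, beta=0.9, eta=0.5):
    """Run Hedge on a T x K matrix of per-round expert losses in [0, 1],
    where the loss at round t is weighted by the discount factor beta**t.

    Returns the algorithm's discounted cumulative loss and that of the best
    single expert in hindsight, so the discounted regret can be inspected.
    """
    T, K = loss_matrix.shape
    disc_cum_losses = np.zeros(K)   # discounted cumulative loss per expert
    alg_loss = 0.0                  # discounted loss incurred by Hedge
    for t in range(T):
        # Exponential weights on the discounted cumulative losses seen so far.
        weights = np.exp(-eta * disc_cum_losses)
        probs = weights / weights.sum()
        discount = beta ** t
        alg_loss += discount * probs.dot(loss_matrix[t])
        disc_cum_losses += discount * loss_matrix[t]
    return alg_loss, disc_cum_losses.min()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    losses = rng.uniform(size=(200, 3))  # synthetic adversarial losses, 3 experts
    alg, best = discounted_hedge(losses)
    print(f"Hedge: {alg:.3f}, best expert: {best:.3f}, discounted regret: {alg - best:.3f}")
```

The regret gap printed by this baseline is the quantity the paper's ADP-based strategies aim to reduce in the discounted setting.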
