The Online Saddle Point Problem and Online Convex Optimization with Knapsacks

21 Jun 2018 · Adrian Rivera, He Wang, Huan Xu

We study the online saddle point problem, an online learning problem in which, at each iteration, a pair of actions must be chosen without knowledge of the current and future (convex-concave) payoff functions. The objective is to minimize the gap between the cumulative payoff and the saddle point value of the aggregate payoff function, which we measure with a metric called "SP-Regret". The problem generalizes the online convex optimization framework, but here we must ensure that both players' cumulative payoffs remain close to the value at the Nash equilibrium of the aggregate game. We propose an algorithm that achieves SP-Regret proportional to $\sqrt{T\ln(T)}$ in the general case, and $\log(T)$ SP-Regret in the strongly convex-concave case. We also consider the special case where the payoff functions are bilinear and the decision sets are probability simplices. In this setting we design algorithms that reduce the SP-Regret bounds from a linear dependence on the dimension of the problem to a logarithmic one. We also study the problem under bandit feedback and provide an algorithm that achieves sublinear SP-Regret. We then consider an online convex optimization with knapsacks problem, motivated by a wide variety of applications such as dynamic pricing, auctions, and crowdsourcing. We relate this problem to the online saddle point problem and establish $O(\sqrt{T})$ regret using a primal-dual algorithm.
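Based on the description above, one plausible formalization of SP-Regret (the exact definition in the paper may differ in details such as the placement of the absolute value or normalization) is

$$\text{SP-Regret}(T) \;=\; \Big|\,\sum_{t=1}^{T} f_t(x_t, y_t) \;-\; \min_{x \in X}\max_{y \in Y} \sum_{t=1}^{T} f_t(x, y)\,\Big|,$$

where $(x_t, y_t)$ is the pair of actions chosen at round $t$, $f_t$ is the convex-concave payoff function revealed afterwards, and $X$ and $Y$ are the two players' decision sets.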
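To illustrate the online protocol (this is a minimal sketch, not the authors' algorithm), the code below runs projected gradient descent-ascent on a stream of convex-concave payoffs; the Euclidean-ball decision sets, the $1/\sqrt{T}$ step size, and the bilinear example payoff are all assumptions made for the example.

```python
import numpy as np

def project_to_ball(v, radius=1.0):
    """Euclidean projection onto an l2 ball (a stand-in for a generic convex decision set)."""
    norm = np.linalg.norm(v)
    return v if norm <= radius else v * (radius / norm)

def online_gradient_descent_ascent(payoff_grads, d_x, d_y, T, eta=None):
    """
    Illustrative projected online gradient descent-ascent for a sequence of
    convex-concave payoffs f_1, ..., f_T (not necessarily the paper's algorithm).

    payoff_grads(t, x, y) must return (grad_x f_t(x, y), grad_y f_t(x, y));
    it is queried only after (x_t, y_t) has been committed, matching the
    protocol in which f_t is revealed after the actions are chosen.
    """
    if eta is None:
        eta = 1.0 / np.sqrt(T)               # standard O(1/sqrt(T)) step size
    x, y = np.zeros(d_x), np.zeros(d_y)
    history = []
    for t in range(1, T + 1):
        history.append((x.copy(), y.copy()))  # play (x_t, y_t)
        gx, gy = payoff_grads(t, x, y)        # observe f_t via its gradients
        x = project_to_ball(x - eta * gx)     # min player: gradient descent step
        y = project_to_ball(y + eta * gy)     # max player: gradient ascent step
    return history

# Hypothetical usage: bilinear payoffs f_t(x, y) = x^T A_t y with random matrices A_t.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = [rng.standard_normal((3, 3)) for _ in range(1000)]
    grads = lambda t, x, y: (A[t - 1] @ y, A[t - 1].T @ x)
    plays = online_gradient_descent_ascent(grads, d_x=3, d_y=3, T=1000)
```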
