Improved Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance

ICML 2020 · Blair Bilodeau, Dylan Foster, Daniel Roy

We study the classical problem of forecasting under logarithmic loss while competing against an arbitrary class of experts. We present a novel approach to bounding the minimax regret that exploits the self-concordance property of the logarithmic loss. Our regret bound depends on the metric entropy of the expert class and matches the previous best known results for arbitrary expert classes. For classes whose metric entropy at scale $\gamma$ under the supremum norm is of order $\Omega(\gamma^{-p})$ with $p > 1$, we improve the dependence on the time horizon; this regime includes, for example, Lipschitz functions on domains of dimension greater than 1.
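For context, here is a standard formulation of the minimax regret in this setting (the notation is chosen for illustration and is not taken from the paper): at each round $t = 1, \dots, n$ the forecaster outputs a distribution $\hat{p}_t$ over outcomes, observes $y_t$, and incurs the logarithmic loss $-\log \hat{p}_t(y_t)$. Against an expert class $\mathcal{F}$, the minimax regret is

$$\mathcal{R}_n(\mathcal{F}) \;=\; \inf_{\hat{p}} \,\sup_{y_{1:n}} \left[ \sum_{t=1}^{n} -\log \hat{p}_t(y_t) \;-\; \inf_{f \in \mathcal{F}} \sum_{t=1}^{n} -\log f(y_t) \right],$$

where the infimum is over forecasting strategies and the supremum is over outcome sequences.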

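The self-concordance property referenced in the abstract can be verified directly for the logarithmic loss: a convex function $\phi$ is self-concordant (in Nesterov's sense) when $|\phi'''(x)| \le 2\,\phi''(x)^{3/2}$. For $\phi(x) = -\log x$ on $x > 0$,

$$\phi''(x) = \frac{1}{x^2}, \qquad \phi'''(x) = -\frac{2}{x^3}, \qquad |\phi'''(x)| = \frac{2}{x^3} = 2\left(\frac{1}{x^2}\right)^{3/2},$$

so the inequality holds with equality; that is, the logarithmic loss is exactly self-concordant.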