no code implementations • 8 Jan 2022 • Daron Anderson, George Iosifidis, Douglas J. Leith
We consider the general problem of online convex optimization with time-varying additive constraints in the presence of predictions for the next cost and constraint functions.
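A generic way to exploit a prediction of the next gradient in online convex optimization is an optimistic gradient step. The sketch below is a standard optimistic update under assumed linear losses and a box domain; it is illustrative only and is not the authors' specific algorithm.

```python
import numpy as np

def optimistic_ogd(grads, predictions, project, x0, eta):
    """One standard way to use predictions in online convex
    optimization: play a point shifted by the predicted gradient m_t,
    then update the anchor y with the realised gradient g_t.
    (Illustrative sketch; not the algorithm from the paper.)"""
    y = np.asarray(x0, dtype=float)
    played = []
    for g, m in zip(grads, predictions):
        x = project(y - eta * np.asarray(m))  # play using the prediction
        played.append(x)
        y = project(y - eta * np.asarray(g))  # update with the true gradient
    return played

# Illustrative domain (an assumption, not from the paper): the box [0, 1]^2.
box = lambda v: np.clip(v, 0.0, 1.0)
```

With perfect predictions the played points track the anchor one step ahead, which is the intuition behind prediction-aware regret bounds.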
no code implementations • NeurIPS 2021 • Daron Anderson, Douglas Leith
We study Online Lazy Gradient Descent for optimisation on a strongly convex domain.
no code implementations • 3 Apr 2020 • Daron Anderson, Douglas Leith
We prove that the familiar Lazy Online Gradient Descent algorithm is universal on polytope domains.
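The Lazy Online Gradient Descent algorithm studied in the two entries above (also known as dual averaging) can be sketched as follows; the box domain, step size, and cost functions are illustrative assumptions, not the papers' settings.

```python
import numpy as np

def lazy_ogd(grad_fn, project, d, T, eta):
    """Lazy (dual-averaging) online gradient descent: gradients are
    accumulated, and each iterate is the projection of the scaled
    negative *cumulative* gradient onto the feasible set.
    (Sketch with assumed parameters, not the papers' analysis.)"""
    g_sum = np.zeros(d)
    iterates = []
    for t in range(T):
        x = project(-eta * g_sum)  # lazy step: project the running sum
        iterates.append(x)
        g_sum += grad_fn(x, t)     # observe the gradient at the played point
    return iterates

# Illustrative polytope (an assumption): the box [0, 1]^2.
box = lambda v: np.clip(v, 0.0, 1.0)
```

The "lazy" update differs from the greedy variant in that the projection is applied to the cumulative gradient rather than to the previous iterate.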
no code implementations • 11 Nov 2019 • Daron Anderson, Douglas J. Leith
We consider online learning problems where the aim is to achieve regret that is efficient in the sense that it is of the same order as the lowest regret among the K experts.
no code implementations • 10 Sep 2019 • Daron Anderson, Douglas Leith
We show that the Subgradient algorithm is universal for online learning on the simplex in the sense that it simultaneously achieves $O(\sqrt{N})$ regret for adversarial costs and $O(1)$ pseudo-regret for i.i.d. costs.
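The Subgradient algorithm on the simplex referenced above can be sketched as projected subgradient descent with the standard sort-based Euclidean projection onto the probability simplex; the decaying step size and the linear cost vectors below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex
    (the standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * ks > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def subgradient_simplex(costs, eta=0.5):
    """Projected subgradient descent on the simplex with an assumed
    eta / sqrt(t) step size; `costs` is a sequence of linear cost
    vectors revealed one per round.  (Illustrative sketch.)"""
    d = len(costs[0])
    x = np.full(d, 1.0 / d)  # start at the uniform distribution
    for t, c in enumerate(costs, start=1):
        x = project_simplex(x - (eta / np.sqrt(t)) * np.asarray(c))
    return x
```

On i.i.d. costs that consistently favour one vertex, the iterate concentrates on that vertex, which is the behaviour behind the constant pseudo-regret claim.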