no code implementations • 13 Feb 2024 • Dan Garber, Ben Kretzu
We also present a more efficient algorithm that requires only first-order oracle access to the soft constraints and achieves similar bounds w.r.t.
no code implementations • 9 Feb 2023 • Dan Garber, Ben Kretzu
We consider the setting of online convex optimization (OCO) with exp-concave losses.
no code implementations • 9 Feb 2022 • Dan Garber, Ben Kretzu
Concretely, when assuming the availability of a linear optimization oracle (LOO) for the feasible set, on a sequence of length $T$, our algorithms guarantee $O(T^{3/4})$ adaptive regret and $O(T^{3/4})$ adaptive expected regret, for the full-information and bandit settings, respectively, using only $O(T)$ calls to the LOO.
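For intuition only, here is a minimal Python sketch of the classical online Frank-Wolfe update (in the style of Hazan and Kale), which is the textbook way to run projection-free OCO with a single LOO call per round and $O(T^{3/4})$ standard regret; it is not the adaptive-regret algorithm of this paper. The unit $\ell_1$-ball oracle `loo_l1_ball`, the step sizes, and the toy linear losses are illustrative assumptions.

```python
import numpy as np

def loo_l1_ball(c):
    # Linear optimization oracle (LOO) for the unit l1 ball:
    # argmin_{||v||_1 <= 1} <c, v> is a signed standard basis vector.
    i = int(np.argmax(np.abs(c)))
    v = np.zeros_like(c)
    v[i] = -np.sign(c[i]) if c[i] != 0 else 1.0
    return v

def online_frank_wolfe(grad_t, x1, T, eta):
    # Projection-free OCO sketch: one LOO call per round, O(T) LOO calls in total.
    x = x1.copy()
    g_sum = np.zeros_like(x1)
    played = []
    for t in range(1, T + 1):
        played.append(x.copy())
        g_sum += grad_t(x, t)                           # full-information feedback: gradient of f_t at x_t
        surrogate_grad = eta * g_sum + 2.0 * (x - x1)   # gradient of F_t(x) = eta*<g_sum, x> + ||x - x1||^2
        v = loo_l1_ball(surrogate_grad)                 # the single LOO call of round t
        gamma = min(1.0, t ** -0.5)                     # Frank-Wolfe step size
        x = x + gamma * (v - x)
    return played

# Toy run: linear losses f_t(x) = <a_t, x> over the unit l1 ball in R^20.
rng = np.random.default_rng(0)
d, T = 20, 500
a = rng.normal(size=(T + 1, d))
xs = online_frank_wolfe(lambda x, t: a[t], np.zeros(d), T, eta=T ** -0.75)
print(len(xs), round(float(np.linalg.norm(xs[-1], 1)), 3))
```

Because every iterate is a convex combination of points inside the feasible set, the method stays feasible without ever computing a projection, which is what makes LOO-based updates attractive when projections are expensive.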
no code implementations • 15 Oct 2020 • Dan Garber, Ben Kretzu
We also revisit the bandit setting under strong convexity and prove a similar bound of $\tilde O(T^{2/3})$ (instead of $O(T^{3/4})$ without strong convexity).
no code implementations • 8 Oct 2019 • Dan Garber, Ben Kretzu
We revisit the challenge of designing online algorithms for the bandit convex optimization problem (BCO) that are also scalable to high-dimensional problems.
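As a concrete illustration of the bandit ingredient such methods rely on, here is a small Python sketch of a standard single-point gradient estimator (in the style of Flaxman, Kalai and McMahan), not the specific estimator or algorithm of this paper; the function name, the smoothing radius `delta`, and the toy quadratic loss are assumptions for the example.

```python
import numpy as np

def one_point_gradient_estimate(f_t, x, delta, rng):
    # Single-point spherical estimator: using only the bandit feedback
    # f_t(x + delta*u), the vector (d/delta) * f_t(x + delta*u) * u is an
    # unbiased estimate of the gradient of a delta-smoothed version of f_t.
    d = x.shape[0]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)            # uniform direction on the unit sphere
    observed = f_t(x + delta * u)     # the only value revealed this round
    return (d / delta) * observed * u

# Toy check: average many estimates for f(x) = 0.5*||x||^2 at x0;
# the mean approaches the true gradient x0 as the sample count grows.
rng = np.random.default_rng(1)
x0 = np.ones(5)
grads = [one_point_gradient_estimate(lambda z: 0.5 * float(z @ z), x0, 0.25, rng)
         for _ in range(100000)]
print(np.round(np.mean(grads, axis=0), 1))
```

The estimator needs only a single function value per round, which is exactly the bandit feedback model; the algorithmic challenge the paper addresses is combining such feedback with updates that remain cheap in high dimension.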