no code implementations • 7 Jun 2022 • Idan Amir, Guy Azov, Tomer Koren, Roi Livni
We study best-of-both-worlds algorithms for bandits with switching cost, recently addressed by Rouyer, Seldin, and Cesa-Bianchi (2021).
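As a rough illustration of the setting (not the paper's algorithm), here is a minimal Python sketch of the switching-cost bandit protocol: each round the learner pays its chosen arm's loss, plus a fixed cost whenever it changes arms. The `eps_greedy` policy and the uniform loss table are placeholder assumptions for the demo.

```python
import numpy as np

def switching_cost_bandit(policy, losses, switch_cost=1.0):
    """Run a K-armed bandit for T rounds, charging `switch_cost`
    whenever the played arm differs from the previous one.

    `losses` is a (T, K) array; `policy(t, prev_arm, history)` returns
    an arm.  This is only the evaluation protocol, not an algorithm.
    """
    T, K = losses.shape
    total, prev = 0.0, None
    history = []  # (arm, observed loss) pairs -- bandit feedback only
    for t in range(T):
        arm = policy(t, prev, history)
        total += losses[t, arm]
        if prev is not None and arm != prev:
            total += switch_cost  # price paid for switching arms
        history.append((arm, losses[t, arm]))
        prev = arm
    return total

# Placeholder policy: naive epsilon-greedy (illustrative only).
def eps_greedy(t, prev, history, K=2, eps=0.1,
               rng=np.random.default_rng(0)):
    if not history or rng.random() < eps:
        return rng.integers(K)
    means = [np.mean([l for a, l in history if a == k] or [0.0])
             for k in range(K)]
    return int(np.argmin(means))

rng = np.random.default_rng(1)
print(switching_cost_bandit(eps_greedy, rng.random((1000, 2))))
```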
no code implementations • 27 Feb 2022 • Idan Amir, Roi Livni, Nathan Srebro
We consider linear prediction with a convex Lipschitz loss, or more generally, stochastic convex optimization problems of generalized linear form, i.e., where each instantaneous loss is a scalar convex function of a linear function.
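A minimal sketch of the generalized linear form, using the absolute loss $\phi(z) = |z|$ as a concrete 1-Lipschitz convex scalar loss applied to the linear function $\langle w, x \rangle - y$; the SGD routine and step-size schedule below are illustrative choices, not the paper's method.

```python
import numpy as np

def sgd_generalized_linear(X, y, phi_grad, lr=0.1, rng=None):
    """SGD on losses of the form phi(<w, x_i> - y_i), where phi is a
    scalar convex Lipschitz function given via its (sub)gradient
    `phi_grad`.  Returns the averaged iterate.
    """
    n, d = X.shape
    rng = rng or np.random.default_rng(0)
    w, avg = np.zeros(d), np.zeros(d)
    for t in range(n):
        i = rng.integers(n)
        z = X[i] @ w - y[i]
        # chain rule: grad of phi(<w,x>-y) w.r.t. w is phi'(z) * x
        w = w - lr / np.sqrt(t + 1) * phi_grad(z) * X[i]
        avg += w
    return avg / n

# Absolute loss: convex, 1-Lipschitz, subgradient sign(z).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
w_star = rng.normal(size=5)
y = X @ w_star
w_hat = sgd_generalized_linear(X, y, np.sign, rng=rng)
print(np.linalg.norm(w_hat - w_star))
```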
no code implementations • NeurIPS 2021 • Idan Amir, Yair Carmon, Tomer Koren, Roi Livni
We study the generalization performance of full-batch optimization algorithms for stochastic convex optimization: these are first-order methods that only access the exact gradient of the empirical risk (rather than gradients with respect to individual data points), and include a wide range of algorithms such as gradient descent, mirror descent, and their regularized and/or accelerated variants.
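For concreteness, a minimal sketch of a full-batch first-order method: the only oracle access is the exact gradient of the empirical risk $F(w) = \frac{1}{n}\sum_i f(w, z_i)$, never gradients at individual data points. The least-squares objective is a placeholder example, not an assumption of the paper.

```python
import numpy as np

def full_batch_gd(grad_emp_risk, w0, lr=0.1, steps=100):
    """Full-batch gradient descent: every update queries only the exact
    gradient of the empirical risk over the whole sample.
    """
    w = w0.copy()
    for _ in range(steps):
        w = w - lr * grad_emp_risk(w)
    return w

# Illustrative empirical risk: least squares on a fixed sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)); y = rng.normal(size=200)
grad = lambda w: X.T @ (X @ w - y) / len(y)  # exact batch gradient
print(full_batch_gd(grad, np.zeros(3)))
```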
no code implementations • 1 Feb 2021 • Idan Amir, Tomer Koren, Roi Livni
We give a new separation result between the generalization performance of stochastic gradient descent (SGD) and of full-batch gradient descent (GD) in the fundamental stochastic convex optimization model.
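To make the oracle contrast explicit, a companion sketch of one-pass SGD on the same illustrative least-squares data as above: each step queries the gradient of the loss on a single fresh sample, rather than the exact empirical-risk gradient. Hyperparameters are illustrative.

```python
import numpy as np

def sgd_one_pass(grad_example, n, w0, lr=0.05, rng=None):
    """One-pass SGD: each update uses the gradient of the loss on one
    freshly drawn example, in contrast to full-batch GD, which only
    sees the exact gradient of the full empirical risk.
    """
    rng = rng or np.random.default_rng(0)
    w = w0.copy()
    for _ in range(n):
        i = rng.integers(n)                  # one sample per step
        w = w - lr * grad_example(w, i)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)); y = rng.normal(size=200)
grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]  # per-example gradient
print(sgd_one_pass(grad_i, len(y), np.zeros(3)))
```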
no code implementations • NeurIPS 2020 • Idan Amir, Idan Attias, Tomer Koren, Roi Livni, Yishay Mansour
We revisit the fundamental problem of prediction with expert advice, in a setting where the environment is benign and generates losses stochastically, but the feedback observed by the learner is subject to a moderate adversarial corruption.
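The natural baseline in this setting is the textbook multiplicative-weights (Hedge) algorithm; the sketch below runs it against stochastic losses with a few adversarially flipped rounds. It is not the paper's corruption-robust algorithm, and the Bernoulli loss model and corruption pattern are assumptions for the demo.

```python
import numpy as np

def hedge(loss_feedback, T, K, eta=0.1):
    """Multiplicative weights (Hedge) for prediction with expert
    advice: play the weight distribution, then exponentially
    down-weight experts by their observed losses.
    """
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        p = w / w.sum()               # current distribution over experts
        losses = loss_feedback(t)     # observed (possibly corrupted) losses
        total += p @ losses
        w *= np.exp(-eta * losses)
    return total

# Two experts with Bernoulli losses; every 100th round is corrupted.
rng = np.random.default_rng(0)
def feedback(t):
    l = rng.binomial(1, [0.3, 0.5]).astype(float)
    if t % 100 == 0:
        l = 1.0 - l                   # sparse adversarial flip
    return l

print(hedge(feedback, 1000, 2))
```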