no code implementations • 5 Mar 2024 • Amit Attia, Tomer Koren
This short note describes a simple technique for analyzing probabilistic algorithms that rely on a light-tailed (but not necessarily bounded) source of randomization.
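The note's technique itself is not reproduced in this snippet; as a reference point, one standard formalization of "light-tailed (but not necessarily bounded)" is sub-Gaussianity. A real-valued random variable $X$ is $\sigma^2$-sub-Gaussian if

\[
  \mathbb{E}\, e^{\lambda (X - \mathbb{E} X)} \le e^{\lambda^2 \sigma^2 / 2} \quad \text{for all } \lambda \in \mathbb{R},
\]

which by a Chernoff argument yields the high-probability tail bound

\[
  \Pr\bigl( |X - \mathbb{E} X| \ge t \bigr) \le 2\, e^{-t^2 / (2\sigma^2)} \quad \text{for all } t \ge 0.
\]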
no code implementations • 5 Feb 2024 • Amit Attia, Tomer Koren
We study the problem of parameter-free stochastic optimization, asking whether, and under what conditions, fully parameter-free methods exist: methods that achieve convergence rates competitive with optimally tuned methods, without requiring significant knowledge of the true problem parameters.
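To make "optimally tuned" concrete (a standard illustration, not taken from the paper): for a convex $G$-Lipschitz objective, SGD run for $T$ steps with the tuned stepsize

\[
  \eta^\star = \frac{D}{G\sqrt{T}}, \qquad D = \lVert x_1 - x^\star \rVert,
\]

attains $\mathbb{E}\, f(\bar{x}_T) - f(x^\star) = O\!\bigl(DG/\sqrt{T}\bigr)$. A fully parameter-free method is asked to match this rate, up to polylogarithmic factors, without knowing $D$ or $G$ in advance.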
no code implementations • 17 Feb 2023 • Amit Attia, Tomer Koren
We study Stochastic Gradient Descent with AdaGrad stepsizes: a popular adaptive (self-tuning) method for first-order stochastic optimization.
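A minimal sketch of the scalar ("norm") version of SGD with AdaGrad stepsizes, where the accumulated squared gradient norms set the stepsize; the defaults eta and b0 and the toy objective below are illustrative choices, not the paper's:

    import numpy as np

    def sgd_adagrad_norm(grad, x0, steps, eta=1.0, b0=1e-8, seed=0):
        """SGD with the scalar AdaGrad-Norm stepsize
        eta_t = eta / sqrt(b0^2 + sum_{s <= t} ||g_s||^2)."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        acc = b0 ** 2  # running sum of squared stochastic-gradient norms
        for _ in range(steps):
            g = grad(x, rng)  # stochastic gradient at the current point
            acc += g @ g
            x = x - (eta / np.sqrt(acc)) * g  # stepsize self-tunes as gradients accumulate
        return x

    # Example: a noisy quadratic f(x) = 0.5 * ||x||^2 with Gaussian gradient noise.
    noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
    x_hat = sgd_adagrad_norm(noisy_grad, x0=np.ones(5), steps=10_000)

Note that no Lipschitz or noise parameters are passed in: the accumulator acc plays the role of the tuning knob, which is what makes the stepsize "self-tuning."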
no code implementations • 17 Jul 2022 • Amit Attia, Tomer Koren
We consider the problem of designing uniformly stable first-order optimization algorithms for empirical risk minimization.
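For context, the standard notion of uniform stability (not specific to this paper): a randomized algorithm $A$ mapping an $n$-sample dataset $S$ to a model $A(S)$ is $\varepsilon$-uniformly stable with respect to a loss $\ell$ if, for every pair of datasets $S, S'$ differing in a single example,

\[
  \sup_{z}\; \mathbb{E}\bigl[ \ell(A(S); z) - \ell(A(S'); z) \bigr] \le \varepsilon,
\]

where the expectation is over the algorithm's internal randomness.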
no code implementations • NeurIPS 2021 • Amit Attia, Tomer Koren
For convex quadratic objectives, Chen et al. (2018) proved that the uniform stability of accelerated gradient descent grows quadratically with the number of optimization steps, and conjectured that the same is true for the general convex and smooth case.
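Schematically (suppressing constants, stepsize, and Lipschitz factors), quadratic growth in the number of steps $T$ with $n$ training examples reads

\[
  \varepsilon_{\mathrm{stab}}(T) \;\propto\; \frac{T^2}{n},
\]

in contrast to the linear-in-$T$ stability growth known for plain gradient descent on smooth convex losses.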