Search Results for author: Matan Schliserman

Found 2 papers, 0 papers with code

The Dimension Strikes Back with Gradients: Generalization of Gradient Methods in Stochastic Convex Optimization

no code implementations • 22 Jan 2024 • Matan Schliserman, Uri Sherman, Tomer Koren

Our bound translates to a lower bound of $\Omega(\sqrt{d})$ on the number of training examples required for standard GD to reach a non-trivial test error, answering an open question raised by Feldman (2016) and by Amir, Koren, and Livni (2021b), and showing that a non-trivial dimension dependence is unavoidable.

Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond

no code implementations • 27 Feb 2022 • Matan Schliserman, Tomer Koren

Finally, as direct applications of the general bounds, we return to the setting of linear classification with separable data and establish several novel test loss and test accuracy bounds for gradient descent and stochastic gradient descent for a variety of loss functions with different tail decay rates.
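Since the paper lists no code, the setting it studies can be illustrated with a minimal sketch: full-batch gradient descent minimizing the logistic loss (one of the exponentially-tailed losses the excerpt alludes to) for linear classification on separable data. The dataset, step size, and iteration count below are arbitrary illustrative choices, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable data in 2-D: labels are the sign of the
# first coordinate, and each class is pushed 2 units away from the
# decision boundary, so the margin is at least 2.
X = rng.normal(size=(200, 2))
X[:, 0] += 2.0 * np.where(X[:, 0] > 0, 1.0, -1.0)
y = np.where(X[:, 0] > 0, 1.0, -1.0)

def logistic_loss(w):
    # Average logistic loss: mean_i log(1 + exp(-y_i <x_i, w>)).
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

w = np.zeros(2)
eta = 0.5
for _ in range(500):
    margins = -y * (X @ w)
    # Gradient of the average logistic loss:
    # mean_i of (-y_i x_i) * sigmoid(-y_i <x_i, w>).
    grad = (X * (-y / (1.0 + np.exp(-margins)))[:, None]).mean(axis=0)
    w -= eta * grad

# On separable data the logistic loss decreases toward zero while the
# iterate grows in the direction of a separator; training accuracy
# reaches 1 well within these iterations.
acc = np.mean(np.sign(X @ w) == y)
```

The sketch only shows the optimization dynamics on separable data; the paper's contribution is the test-loss and test-accuracy analysis of such iterates, which no short snippet can reproduce.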

Generalization Bounds
