Search Results for author: Lijia Zhou

Found 5 papers, 1 paper with code

An Agnostic View on the Cost of Overfitting in (Kernel) Ridge Regression

no code implementations • 22 Jun 2023 • Lijia Zhou, James B. Simon, Gal Vardi, Nathan Srebro

We study the cost of overfitting in noisy kernel ridge regression (KRR), which we define as the ratio between the test error of the interpolating ridgeless model and the test error of the optimally-tuned model.

regression
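
As an illustration of the ratio defined above, here is a minimal sketch assuming scikit-learn; the data, kernel, and regularization grid are arbitrary choices of ours, not the paper's setup:

```python
# Minimal sketch of the "cost of overfitting" in noisy kernel ridge
# regression: test error of a (near-)ridgeless interpolator divided by
# the test error of the best-tuned model. All modeling choices here are
# illustrative assumptions.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sin(X[:, 0]) + 0.5 * rng.normal(size=100)   # noisy training labels
X_test = rng.normal(size=(1000, 5))
y_test = np.sin(X_test[:, 0])                      # noiseless test labels

def test_error(alpha):
    model = KernelRidge(alpha=alpha, kernel="rbf", gamma=0.5).fit(X, y)
    return mean_squared_error(y_test, model.predict(X_test))

errors = [test_error(a) for a in np.logspace(-8, 2, 50)]
ridgeless, tuned = errors[0], min(errors)          # alpha ~ 0 vs. best alpha
print("cost of overfitting:", ridgeless / tuned)   # >= 1 by construction
```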

A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models

1 code implementation • 21 Oct 2022 • Lijia Zhou, Frederic Koehler, Pragya Sur, Danica J. Sutherland, Nathan Srebro

We prove a new generalization bound that shows for any class of linear predictors in Gaussian space, the Rademacher complexity of the class and the training error under any continuous loss $\ell$ can control the test error under all Moreau envelopes of the loss $\ell$.

LEMMA
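
For reference, the Moreau envelope of a loss $\ell$ at smoothing level $\lambda > 0$ is the standard construction (notation ours):

$$\ell_\lambda(u) \;=\; \inf_{v \in \mathbb{R}} \Big[\, \ell(v) + \frac{1}{2\lambda}\,(u - v)^2 \Big],$$

a smoothed lower bound on $\ell$ that recovers $\ell$ as $\lambda \to 0$.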

Optimistic Rates: A Unifying Theory for Interpolation Learning and Regularization in Linear Regression

no code implementations • 8 Dec 2021 • Lijia Zhou, Frederic Koehler, Danica J. Sutherland, Nathan Srebro

We study a localized notion of uniform convergence known as an "optimistic rate" (Panchenko 2002; Srebro et al. 2010) for linear regression with Gaussian data.

regression
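
The generic shape of such an optimistic rate, in our notation with $C_n$ a complexity term (roughly following Srebro et al. 2010), is

$$L(h) \;\le\; \hat{L}(h) + O\!\left(\sqrt{\hat{L}(h)\,\frac{C_n}{n}} + \frac{C_n}{n}\right),$$

so that interpolators with $\hat{L}(h) = 0$ enjoy the fast $C_n/n$ rate instead of the usual $\sqrt{C_n/n}$.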

Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds, and Benign Overfitting

no code implementations • NeurIPS 2021 • Frederic Koehler, Lijia Zhou, Danica J. Sutherland, Nathan Srebro

We consider interpolation learning in high-dimensional linear regression with Gaussian data, and prove a generic uniform convergence guarantee on the generalization error of interpolators in an arbitrary hypothesis class in terms of the class's Gaussian width.

Generalization Bounds, regression
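
For a concrete sense of the Gaussian width $w(K) = \mathbb{E}\,\sup_{x \in K} \langle g, x \rangle$ with $g \sim \mathcal{N}(0, I_d)$, here is a hypothetical Monte Carlo check on two simple classes (our example, not the paper's):

```python
# Monte Carlo estimate of the Gaussian width w(K) = E sup_{x in K} <g, x>.
# For the unit l1 ball the supremum is ||g||_inf; for the unit l2 ball it
# is ||g||_2. Both closed forms make this easy to sanity-check.
import numpy as np

rng = np.random.default_rng(0)
d, trials = 1000, 2000
g = rng.normal(size=(trials, d))
print(np.abs(g).max(axis=1).mean(), np.sqrt(2 * np.log(d)))  # l1 ball: ~sqrt(2 log d)
print(np.linalg.norm(g, axis=1).mean(), np.sqrt(d))          # l2 ball: ~sqrt(d)
```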

On Uniform Convergence and Low-Norm Interpolation Learning

no code implementations • NeurIPS 2020 • Lijia Zhou, Danica J. Sutherland, Nathan Srebro

We argue that the consistency of the minimal-norm interpolator can be explained with a slightly weaker, yet standard, notion: uniform convergence of zero-error predictors in a norm ball.
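
In symbols (our notation, not a quote from the paper): rather than the standard uniform bound $\sup_{\|w\| \le B} \big(L(w) - \hat{L}(w)\big)$, the weaker notion controls the population error only over interpolators,

$$\sup_{\|w\| \le B,\ \hat{L}(w) = 0} L(w),$$

which can remain small in regimes where the standard quantity blows up.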
