Search Results for author: Pierre C. Bellec

Found 6 papers, 1 paper with code

Corrected generalized cross-validation for finite ensembles of penalized estimators

1 code implementation • 2 Oct 2023 • Pierre C. Bellec, Jin-Hong Du, Takuya Koriyama, Pratik Patil, Kai Tan

We provide a non-asymptotic analysis of the CGCV and the two intermediate risk estimators for ensembles of convex penalized estimators under Gaussian features and a linear response model.
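As a hedged illustration of the starting point (not the paper's CGCV, which corrects generalized cross-validation for finite ensembles), the sketch below computes the standard GCV risk estimate for a single ridge estimator with numpy; all names and the toy data are illustrative.

```python
import numpy as np

def gcv_ridge(X, y, lam):
    """Standard generalized cross-validation (GCV) risk estimate for ridge
    regression with penalty lam.  The paper's CGCV refines this idea for
    finite ensembles of convex penalized estimators."""
    n, p = X.shape
    # Linear smoother H = X (X^T X + lam I)^{-1} X^T
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    df = np.trace(H)                         # effective degrees of freedom
    return (np.sum(resid ** 2) / n) / (1.0 - df / n) ** 2

# toy usage
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X @ rng.standard_normal(50) * 0.2 + rng.standard_normal(200)
print(gcv_ridge(X, y, lam=1.0))
```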

Out-of-sample error estimate for robust M-estimators with convex penalty

no code implementations • 26 Aug 2020 • Pierre C. Bellec

The out-of-sample error estimate enjoys a relative error of order $n^{-1/2}$ in a linear model with Gaussian covariates and independent noise, either non-asymptotically when $p/n\le \gamma$ or asymptotically in the high-dimensional asymptotic regime $p/n\to\gamma'\in(0,\infty)$.
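The statement concerns the proportional regime where $p/n$ stays bounded. As an illustrative check only (this is not the paper's estimator), the Monte Carlo sketch below fits a Huber M-estimator with a small ridge penalty on Gaussian covariates and measures its out-of-sample error on fresh data; the sample sizes and penalty are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(1)
n, p = 500, 250                              # p/n = 0.5, a proportional regime
beta = rng.standard_normal(p) / np.sqrt(p)

X_train = rng.standard_normal((n, p))
y_train = X_train @ beta + rng.standard_normal(n)

# Huber loss is a robust M-estimator; alpha adds a small convex (ridge) penalty.
est = HuberRegressor(alpha=1e-3, fit_intercept=False, max_iter=1000).fit(X_train, y_train)

# Empirical out-of-sample error on fresh Gaussian covariates and independent noise.
X_test = rng.standard_normal((20000, p))
y_test = X_test @ beta + rng.standard_normal(20000)
print(np.mean((y_test - X_test @ est.coef_) ** 2))
```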

First order expansion of convex regularized estimators

no code implementations • 12 Oct 2019 • Pierre C. Bellec, Arun K. Kuchibhotla

Such a first-order expansion implies that the risk of $\hat{\beta}$ is asymptotically the same as the risk of $\eta$, which leads to a precise characterization of the MSE of $\hat{\beta}$; this characterization takes a particularly simple form for isotropic design.

regression

The cost-free nature of optimally tuning Tikhonov regularizers and other ordered smoothers

no code implementations • ICML 2020 • Pierre C. Bellec, Dana Yang

Our theory reveals that if the Tikhonov regularizers share the same penalty matrix with different tuning parameters, a convex procedure based on $Q$-aggregation achieves the mean square error of the best estimator, up to a small error term no larger than $C\sigma^2$, where $\sigma^2$ is the noise level and $C>0$ is an absolute constant.
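A minimal sketch of the setup (illustrative only): the code below builds the family of Tikhonov estimators that share one penalty matrix across a grid of tuning parameters, and selects among them with a Mallows-Cp-style unbiased risk estimate. The paper's procedure is a $Q$-aggregation step with stronger guarantees; the Cp selection here is just a simple stand-in.

```python
import numpy as np

def tikhonov_family_cp(X, y, P, lams, sigma2):
    """Tikhonov estimators sharing one penalty matrix P over a grid of tuning
    parameters, with a Mallows-Cp-style pick (a stand-in for Q-aggregation)."""
    n, p = X.shape
    fits, cps = [], []
    for lam in lams:
        A = X @ np.linalg.solve(X.T @ X + lam * P, X.T)   # linear smoother
        yhat = A @ y
        df = np.trace(A)
        cps.append(np.sum((y - yhat) ** 2) + 2 * sigma2 * df)
        fits.append(yhat)
    return fits[int(np.argmin(cps))]

# toy usage with an identity penalty matrix (ordinary ridge family)
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 20))
y = X @ rng.standard_normal(20) + rng.standard_normal(100)
yhat = tikhonov_family_cp(X, y, P=np.eye(20), lams=np.logspace(-2, 2, 25), sigma2=1.0)
```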

regression

De-Biasing The Lasso With Degrees-of-Freedom Adjustment

no code implementations • 24 Feb 2019 • Pierre C. Bellec, Cun-Hui Zhang

This modification takes the form of a degrees-of-freedom adjustment that accounts for the dimension of the model selected by the Lasso.
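A hedged sketch of one reading of this idea (not the paper's exact construction): the usual de-biasing correction for coordinate $j$ is rescaled by $(1 - \hat{df}/n)^{-1}$, where $\hat{df}$ is the size of the Lasso support. The function name and the least-squares score vector below are illustrative simplifications.

```python
import numpy as np
from sklearn.linear_model import Lasso

def dof_adjusted_debiased_lasso(X, y, j, lam):
    """Sketch: de-biased Lasso estimate of coefficient j, with the correction
    term rescaled by 1 / (1 - df/n) where df = |support of the Lasso|.
    Simplified reading of the degrees-of-freedom adjustment, not the paper's
    exact estimator."""
    n, p = X.shape
    lasso = Lasso(alpha=lam, fit_intercept=False).fit(X, y)
    beta = lasso.coef_
    df = np.count_nonzero(beta)              # degrees of freedom of the Lasso
    resid = y - X @ beta
    # Score vector z_j: residual of x_j regressed on the other columns
    # (in high dimensions a nodewise Lasso would replace least squares).
    others = np.delete(np.arange(p), j)
    gamma, *_ = np.linalg.lstsq(X[:, others], X[:, j], rcond=None)
    z = X[:, j] - X[:, others] @ gamma
    correction = (z @ resid) / (z @ X[:, j])
    return beta[j] + correction / (1.0 - df / n)

# toy usage in a low-dimensional design
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta = np.zeros(50); beta[0] = 2.0
y = X @ beta + rng.standard_normal(200)
print(dof_adjusted_debiased_lasso(X, y, j=0, lam=0.1))
```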

LEMMA

On the prediction loss of the lasso in the partially labeled setting

no code implementations • 20 Jun 2016 • Pierre C. Bellec, Arnak S. Dalalyan, Edwin Grappin, Quentin Paris

In this paper we revisit the risk bounds of the lasso estimator in the context of transductive and semi-supervised learning.
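A minimal sketch of the setting (illustrative only, not the paper's analysis): in the transductive version of the problem, the Lasso is fit on the labeled pairs while the unlabeled design, available at training time, is where predictions are evaluated. Sample sizes and the penalty level below are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_lab, n_unlab, p = 100, 400, 200
beta = np.zeros(p); beta[:5] = 1.0                 # sparse signal

X_lab = rng.standard_normal((n_lab, p))
y_lab = X_lab @ beta + 0.5 * rng.standard_normal(n_lab)
X_unlab = rng.standard_normal((n_unlab, p))        # unlabeled design, known at fit time

# Transductive use of the Lasso: fit on labeled data, predict on the unlabeled design.
lasso = Lasso(alpha=0.1, fit_intercept=False).fit(X_lab, y_lab)
pred_unlab = lasso.predict(X_unlab)
```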
