1 code implementation • 2 Oct 2023 • Pierre C. Bellec, Jin-Hong Du, Takuya Koriyama, Pratik Patil, Kai Tan
We provide a non-asymptotic analysis of CGCV (corrected generalized cross-validation) and the two intermediate risk estimators for ensembles of convex penalized estimators under Gaussian features and a linear response model.
no code implementations • 26 Aug 2020 • Pierre C. Bellec
The out-of-sample error estimate enjoys a relative error of order $n^{-1/2}$ in a linear model with Gaussian covariates and independent noise, either non-asymptotically when $p/n\le \gamma$ or asymptotically in the high-dimensional asymptotic regime $p/n\to\gamma'\in(0,\infty)$.
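The paper's estimator covers general robust M-estimators with convex penalties; as a minimal stand-in, the classical GCV-style formula for ridge regression illustrates the kind of out-of-sample error estimate in question. Everything below (dimensions, tuning, the GCV formula itself) is an illustrative sketch, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam, sigma = 2000, 1000, 1.0, 1.0   # p/n = 0.5, within the p/n <= gamma regime
X = rng.standard_normal((n, p))           # Gaussian covariates
beta = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta + sigma * rng.standard_normal(n)   # linear model, independent noise

# Ridge estimator: argmin ||y - Xb||^2 / (2n) + (lam / 2) ||b||^2
G = X.T @ X + n * lam * np.eye(p)
beta_hat = np.linalg.solve(G, X.T @ y)

# Effective degrees of freedom: df = tr(X (X'X + n lam I)^{-1} X')
df = np.trace(np.linalg.solve(G, X.T @ X))

# GCV-style estimate of the out-of-sample error from in-sample residuals
resid = y - X @ beta_hat
risk_est = (resid @ resid / n) / (1.0 - df / n) ** 2

# Ground truth for a fresh point x0 ~ N(0, I_p):
# E[(y0 - x0 @ beta_hat)^2] = ||beta_hat - beta||^2 + sigma^2
err = beta_hat - beta
risk_true = err @ err + sigma ** 2
print(risk_est, risk_true)
```

At n = 2000 the relative gap between `risk_est` and `risk_true` is on the order of a few percent, consistent with the $n^{-1/2}$ rate in the abstract.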
no code implementations • 12 Oct 2019 • Pierre C. Bellec, Arun K. Kuchibhotla
Such a first-order expansion implies that the risk of $\hat{\beta}$ is asymptotically the same as the risk of $\eta$, which leads to a precise characterization of the MSE of $\hat{\beta}$; this characterization takes a particularly simple form for isotropic design.
no code implementations • ICML 2020 • Pierre C. Bellec, Dana Yang
Our theory reveals that if the Tikhonov regularizers share the same penalty matrix with different tuning parameters, a convex procedure based on $Q$-aggregation achieves the mean square error of the best estimator, up to a small error term no larger than $C\sigma^2$, where $\sigma^2$ is the noise level and $C>0$ is an absolute constant.
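The Q-aggregation procedure itself is more involved than what fits here; the following sketch captures the setting with a simplified stand-in: ridge (Tikhonov) estimators sharing the penalty matrix $I$ with different tuning parameters, combined into a convex mixture using exponential weights on Mallows-$C_p$ risk estimates. The weighting scheme and all constants are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 300, 60, 1.0
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0
y = X @ beta + sigma * rng.standard_normal(n)
mu = X @ beta                                   # noiseless regression function

lams = [1e-3, 1e-1, 1.0, 10.0]                  # shared penalty matrix I, varying tuning
fits, cps = [], []
for lam in lams:
    G = X.T @ X + n * lam * np.eye(p)
    pred = X @ np.linalg.solve(G, X.T @ y)
    df = np.trace(np.linalg.solve(G, X.T @ X))  # effective degrees of freedom
    # Mallows-Cp estimate of the prediction risk of this fit
    cp = ((y - pred) @ (y - pred) + 2.0 * sigma ** 2 * df) / n
    fits.append(pred)
    cps.append(cp)

cps = np.array(cps)
w = np.exp(-n * (cps - cps.min()) / (4.0 * sigma ** 2))   # exponential weights
w /= w.sum()                                              # convex weights, sum to 1
agg = sum(wi * f for wi, f in zip(w, fits))               # aggregated fitted values

mse = [((f - mu) @ (f - mu)) / n for f in fits]
mse_agg = (agg - mu) @ (agg - mu) / n
```

With this temperature the weights concentrate on the estimator with the smallest estimated risk, so the aggregate tracks the best fit in the family up to a remainder of order $\sigma^2$, mirroring the form of the guarantee in the abstract.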
no code implementations • 24 Feb 2019 • Pierre C. Bellec, Cun-Hui Zhang
This modification takes the form of a degrees-of-freedom adjustment that accounts for the dimension of the model selected by Lasso.
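For the simplest isotropic-design case, the adjustment can be sketched as follows: the classical de-biasing correction $X^\top(y - X\hat\beta)/n$ has its denominator replaced by $n - \widehat{\mathrm{df}}$, where $\widehat{\mathrm{df}}$ is the number of nonzero Lasso coefficients. The ISTA solver, tuning choice, and dimensions below are illustrative assumptions; the paper's setting is more general:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=1000):
    """Minimize ||y - X b||^2 / (2n) + lam * ||b||_1 via ISTA (proximal gradient)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n           # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        b = b - X.T @ (X @ b - y) / (n * L)     # gradient step
        b = np.sign(b) * np.maximum(np.abs(b) - lam / L, 0.0)  # soft-thresholding
    return b

rng = np.random.default_rng(2)
n, p, sigma = 500, 200, 1.0
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:10] = 1.0                                  # 10-sparse signal
y = X @ beta + sigma * rng.standard_normal(n)

lam = sigma * np.sqrt(2.0 * np.log(p) / n)       # standard universal tuning
beta_hat = lasso_ista(X, y, lam)
df = np.count_nonzero(beta_hat)                  # dimension of the selected model

resid = y - X @ beta_hat
naive = beta_hat + X.T @ resid / n               # classical de-biasing, no adjustment
adjusted = beta_hat + X.T @ resid / (n - df)     # degrees-of-freedom adjustment
```

The adjustment inflates the correction by the factor $n/(n - \widehat{\mathrm{df}})$, which compensates for the shrinkage left over after model selection.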
no code implementations • 20 Jun 2016 • Pierre C. Bellec, Arnak S. Dalalyan, Edwin Grappin, Quentin Paris
In this paper, we revisit the risk bounds of the Lasso estimator in the context of transductive and semi-supervised learning.