1 code implementation • 21 Jan 2022 • Xiaoyu Ma, Sylvain Sardy, Nick Hengartner, Nikolai Bobenko, Yen Ting Lin
To fit sparse linear associations, a LASSO sparsity-inducing penalty with a single hyperparameter provably recovers the important features (needles) with high probability in certain regimes, even when the sample size is smaller than the dimension of the input vector (haystack).
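A minimal sketch of this needle-in-a-haystack recovery, using scikit-learn's `Lasso` as a stand-in; the dimensions, noise level, and regularization value `alpha` below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 50, 200, 3                 # sample size n < dimension p; s needles
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 5.0                       # the "needles" hidden in the haystack
y = X @ beta + 0.1 * rng.standard_normal(n)

# A single hyperparameter (alpha) controls how aggressively
# coefficients are shrunk to exactly zero.
model = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(model.coef_)
print(support)                       # the true needles {0, 1, 2} should appear
```

Because the signal coefficients are large relative to the noise, the estimated support contains the true features even though `n < p`.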
no code implementations • 7 Jun 2020 • Sylvain Sardy, Nicolas W Hengartner, Nikolai Bobenko, Yen Ting Lin
Using a sparsity inducing penalty in artificial neural networks (ANNs) avoids over-fitting, especially in situations where noise is high and the training set is small in comparison to the number of features.
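The mechanism behind such a penalty can be sketched with the soft-thresholding (proximal) operator of the $\ell_1$ norm, which is what sets small network weights exactly to zero during training; this is a generic illustration, not the authors' training procedure.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of the l1 norm: shrinks every weight toward
    # zero by t, and zeroes out weights whose magnitude is below t.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# Illustrative weights of one ANN layer after a gradient step:
w = np.array([0.8, -0.05, 0.3, -0.9, 0.02])
print(soft_threshold(w, 0.1))   # small weights are pruned to exactly 0
```

Applying this operator after each gradient step (proximal gradient descent) yields sparse weights, which limits over-fitting when the training set is small relative to the number of features.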
1 code implementation • 12 May 2020 • Pascaline Descloux, Claire Boyer, Julie Josse, Aude Sportisse, Sylvain Sardy
The use of Robust Lasso-Zero is showcased for variable selection with missing values in the covariates.
1 code implementation • 14 May 2018 • Pascaline Descloux, Sylvain Sardy
The high-dimensional linear model $y = X \beta^0 + \epsilon$ is considered and the focus is put on the problem of recovering the support $S^0$ of the sparse vector $\beta^0.$ We introduce Lasso-Zero, a new $\ell_1$-based estimator whose novelty resides in an "overfit, then threshold" paradigm and the use of noise dictionaries concatenated to $X$ for overfitting the response.
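A loose sketch of the "overfit, then threshold" idea with a noise dictionary. As an assumption for brevity, the overfitting step below uses the minimum-norm least-squares solution (`np.linalg.pinv`) rather than the paper's $\ell_1$-based fit, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 40, 100, 100
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:2] = 4.0                          # sparse true support S^0 = {0, 1}
y = X @ beta0 + 0.5 * rng.standard_normal(n)

G = rng.standard_normal((n, q))          # noise dictionary, concatenated to X
A = np.hstack([X, G])
coef = np.linalg.pinv(A) @ y             # overfit: interpolates the response y

# Threshold: coefficients landing on the noise columns calibrate
# what magnitude is attributable to noise alone.
tau = np.max(np.abs(coef[p:]))
support = np.flatnonzero(np.abs(coef[:p]) > tau)
print(support)
```

The coefficients assigned to the pure-noise columns of `G` provide a data-driven threshold `tau` for deciding which coefficients on `X` are genuine.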
no code implementations • 5 Dec 2014 • Jairo Diaz-Rodriguez, Sylvain Sardy
To estimate a sparse linear model from data with Gaussian noise, the consensus of the lasso and compressed sensing literatures is that thresholding estimators such as the lasso and the Dantzig selector can, in some regimes, identify with high probability part of the significant covariates asymptotically, and they remain numerically tractable thanks to convexity.