no code implementations • 24 Oct 2019 • Sangkyun Lee, Piotr Sobczyk, Malgorzata Bogdan
Adapting the sorted $\ell_1$ (SL1) penalty to probabilistic graphical models, we show that it can be used for structure learning of Gaussian Markov random fields (MRFs) via our proposed procedure nsSLOPE (neighborhood selection Sorted L-One Penalized Estimation), which controls the false discovery rate (FDR) of edge detection.
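A minimal sketch of the neighborhood-selection idea, assuming a non-increasing weight sequence lam of length p-1 for each node-wise regression: every column of the data matrix is regressed on the others by proximal gradient with the sorted-$\ell_1$ prox (computed by the standard stack-based pool-adjacent-violators routine), and an edge is declared whenever either regression selects the other node. The function names are hypothetical, and this is not the paper's exact FDR-calibrated procedure.

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted-L1 penalty, argmin_x 0.5*||x - v||^2 + sum_i lam_i*|x|_(i)
    for non-increasing lam, via pool-adjacent-violators (PAVA)."""
    sign, u = np.sign(v), np.abs(v)
    order = np.argsort(u)[::-1]              # sort |v| in decreasing order
    z = u[order] - lam
    vals, sizes = [], []                     # non-increasing isotonic regression of z
    for zi in z:
        val, size = zi, 1
        while vals and vals[-1] <= val:      # merge blocks that violate monotonicity
            val = (vals[-1] * sizes[-1] + val * size) / (sizes[-1] + size)
            size += sizes.pop()
            vals.pop()
        vals.append(val)
        sizes.append(size)
    x = np.repeat(vals, sizes)
    out = np.empty_like(x)
    out[order] = np.maximum(x, 0.0)          # clip at zero, undo the sort
    return sign * out

def nsslope_edges(X, lam, n_iter=500):
    """Node-wise SLOPE regressions; edge j--k if either node selects the other."""
    n, p = X.shape
    B = np.zeros((p, p))
    for j in range(p):
        y, Z = X[:, j], np.delete(X, j, axis=1)
        L = np.linalg.norm(Z, 2) ** 2 / n    # Lipschitz constant of the gradient
        b = np.zeros(p - 1)
        for _ in range(n_iter):
            grad = Z.T @ (Z @ b - y) / n
            b = prox_sorted_l1(b - grad / L, lam / L)
        B[j, np.arange(p) != j] = b
    return (np.abs(B) + np.abs(B.T)) > 0     # OR-rule edge set (diagonal stays False)
```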
3 code implementations • 14 Sep 2019 • Wei Jiang, Malgorzata Bogdan, Julie Josse, Blazej Miasojedow, Veronika Rockova, Traumabase group
We consider the problem of variable selection in high-dimensional settings with missing observations among the covariates.
Subjects: Methodology; Applications; Computation
no code implementations • 7 Dec 2017 • Alain Virouleau, Agathe Guilloux, Stéphane Gaïffas, Malgorzata Bogdan
Following a recent line of work on methods for simultaneous robust regression and outlier detection, we consider in this paper a high-dimensional linear regression model with individual (per-sample) intercepts.
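An illustrative sketch, not the paper's exact estimator: with a sparse individual-intercept vector mu in y = X beta + mu + noise, one classical device is to run a plain Lasso on the augmented design [X | I], so that samples with a nonzero fitted mu_i are flagged as outliers. The function name and the use of scikit-learn's Lasso (rather than the penalties studied in the paper) are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def robust_lasso_intercepts(X, y, alpha=0.1):
    # Model y = X beta + mu + eps with sparse mu: append one indicator
    # column per sample and let the L1 penalty shrink most mu_i to zero.
    n, p = X.shape
    Z = np.hstack([X, np.eye(n)])
    fit = Lasso(alpha=alpha).fit(Z, y)
    beta, mu = fit.coef_[:p], fit.coef_[p:]
    return beta, np.flatnonzero(mu)   # estimated coefficients, outlier indices
```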
no code implementations • 6 Oct 2017 • Philipp J. Kremer, Sangkyun Lee, Malgorzata Bogdan, Sandra Paterlini
We introduce a financial portfolio optimization framework that automatically selects the relevant assets and estimates their weights by relying on a sorted $\ell_1$-norm penalization, henceforth SLOPE.
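A hedged sketch of such a penalized minimum-variance problem, assuming a positive semidefinite covariance estimate Sigma and a non-increasing weight vector lam; it relies on the identity sum_i lam_i |w|_(i) = sum_k (lam_k - lam_{k+1}) * (sum of k largest |w_i|), with lam_{p+1} = 0, to express the sorted-$\ell_1$ penalty through cvxpy's sum_largest atom. The function name and the single fully-invested constraint are illustrative choices, not the paper's full framework.

```python
import numpy as np
import cvxpy as cp

def slope_min_variance(Sigma, lam):
    # min_w  w' Sigma w + sum_i lam_i |w|_(i)   s.t.  sum(w) = 1
    p = len(lam)
    w = cp.Variable(p)
    diffs = np.append(lam[:-1] - lam[1:], lam[-1])   # lam_k - lam_{k+1}, lam_{p+1} = 0
    penalty = sum(diffs[k] * cp.sum_largest(cp.abs(w), k + 1) for k in range(p))
    prob = cp.Problem(cp.Minimize(cp.quad_form(w, Sigma) + penalty),
                      [cp.sum(w) == 1])              # fully-invested constraint
    prob.solve()
    return w.value
```

This formulation is convenient for small asset universes; for larger ones, a proximal-gradient solver using the sorted-$\ell_1$ prox would be preferable to stacking p sum_largest atoms.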
1 code implementation • 17 Oct 2016 • Damian Brzyski, Alexej Gossmann, Weijie Su, Malgorzata Bogdan
Sorted L-One Penalized Estimation (SLOPE) is a relatively new convex optimization procedure that allows for adaptive selection of regressors under sparse high-dimensional designs.
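For concreteness, a minimal sketch of SLOPE regression by plain proximal gradient (ISTA), reusing the prox_sorted_l1 routine sketched above and the Benjamini-Hochberg-style weight sequence lambda_i = Phi^{-1}(1 - iq/(2p)) from the SLOPE literature; the function names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def bh_lambdas(p, q=0.1):
    # BH-style non-increasing weights: Phi^{-1}(1 - i*q/(2p)), i = 1..p
    return norm.ppf(1 - np.arange(1, p + 1) * q / (2 * p))

def slope_ista(X, y, lam, n_iter=1000):
    # proximal gradient for (1/2n)*||y - Xb||^2 + sum_i lam_i |b|_(i)
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n            # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = prox_sorted_l1(b - grad / L, lam / L)   # prox sketched earlier
    return b
```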
Subjects: Methodology (MSC 46N10; ACM G.1.6)
no code implementations • 18 Nov 2015 • Sangkyun Lee, Damian Brzyski, Malgorzata Bogdan
In this paper we propose a primal-dual proximal extragradient algorithm to solve the generalized Dantzig selector (GDS) estimation problem, based on a new convex-concave saddle-point (SP) reformulation.
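A generic sketch under an assumed bilinear saddle-point form, not necessarily the paper's exact reformulation or algorithm: the Dantzig-selector constraint $\|A^T(y - Ax)\|_\infty \le \delta$ equals $\max_u\, u^T(Kx - c) - \delta\|u\|_1$ with $K = A^TA$ and $c = A^Ty$, giving the saddle problem $\min_x \max_u\, \|x\|_1 + u^T(Kx - c) - \delta\|u\|_1$, to which a Korpelevich-style proximal extragradient applies.

```python
import numpy as np

def soft(v, t):
    # soft-thresholding, the prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dantzig_extragradient(A, y, delta, n_iter=2000):
    K, c = A.T @ A, A.T @ y
    p = A.shape[1]
    tau = sigma = 0.9 / np.linalg.norm(K, 2)   # ensures tau*sigma*||K||^2 < 1
    x, u = np.zeros(p), np.zeros(p)
    for _ in range(n_iter):
        xt = soft(x - tau * (K @ u), tau)                  # predictor steps
        ut = soft(u + sigma * (K @ x - c), sigma * delta)
        x = soft(x - tau * (K @ ut), tau)                  # corrector steps
        u = soft(u + sigma * (K @ xt - c), sigma * delta)
    return x
```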
3 code implementations • 5 Nov 2015 • Weijie Su, Malgorzata Bogdan, Emmanuel Candes
In regression settings where explanatory variables have very low correlations and there are relatively few effects, each of large magnitude, we expect the Lasso to find the important variables with few errors, if any.
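A quick illustrative simulation of this regime (weakly correlated Gaussian design, a few strong signals), tracking the false discovery proportion along the Lasso path with scikit-learn; the dimensions and signal strength are arbitrary choices, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n, p, k = 250, 500, 10
X = rng.standard_normal((n, p)) / np.sqrt(n)      # near-orthogonal columns
beta = np.zeros(p)
beta[:k] = 8.0                                    # a few strong effects
y = X @ beta + rng.standard_normal(n)

alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
for a, b in zip(alphas, coefs.T):                 # walk down the path
    sel = np.flatnonzero(b)
    fdp = np.mean(sel >= k) if sel.size else 0.0  # proportion of nulls selected
    print(f"alpha={a:.4f}  selected={sel.size:3d}  FDP={fdp:.2f}")
```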