Search Results for author: Emmanuel Candes

Found 6 papers, 3 papers with code

Panning for Gold: Model-X Knockoffs for High-dimensional Controlled Variable Selection

3 code implementations • 7 Oct 2016 • Emmanuel Candes, Yingying Fan, Lucas Janson, Jinchi Lv

Whereas the knockoffs procedure is constrained to homoscedastic linear models with $n\ge p$, the key innovation here is that model-X knockoffs provide valid inference from finite samples in settings in which the conditional distribution of the response is arbitrary and completely unknown.

Methodology • Statistics Theory • Applications
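
When the features are jointly Gaussian with known covariance, the knockoff construction admits a short sketch. The snippet below is a hypothetical minimal implementation of the equicorrelated Gaussian sampler (the function name and jitter term are mine, not the authors' reference code); computing knockoff statistics and applying the selection threshold would follow as separate steps.

```python
import numpy as np

def gaussian_knockoffs(X, Sigma, rng=None):
    """Sample model-X knockoffs for rows of X, assumed ~ N(0, Sigma).

    Equicorrelated construction s_j = min(2 * lambda_min(Sigma), 1);
    a hypothetical minimal sketch, not the authors' reference code.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    lam_min = np.linalg.eigvalsh(Sigma).min()
    s = np.full(p, min(2.0 * lam_min, 1.0))          # equicorrelated s
    Sinv = np.linalg.inv(Sigma)
    # conditional law: Xk | X ~ N(X - X Sigma^{-1} D, 2D - D Sigma^{-1} D)
    mu = X - (X @ Sinv) * s                           # right-multiply by D = diag(s)
    V = 2.0 * np.diag(s) - (s[:, None] * Sinv) * s[None, :]
    L = np.linalg.cholesky(V + 1e-10 * np.eye(p))     # jitter for stability
    return mu + rng.standard_normal((n, p)) @ L.T
```

One would then fit, say, a lasso on the augmented design $[X, \tilde{X}]$, form antisymmetric statistics $W_j$ contrasting each feature with its knockoff, and select the variables above the knockoff threshold to control the FDR.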

The Projected Power Method: An Efficient Algorithm for Joint Alignment from Pairwise Differences

no code implementations • 19 Sep 2016 • Yuxin Chen, Emmanuel Candes

We prove that for a broad class of statistical models, the proposed projected power method makes no error, and hence converges to the maximum likelihood estimate, in a suitable regime.
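
The method iterates a power step followed by projection onto the discrete constraint set. As an illustration under my own assumptions (not the paper's general $m$-ary alignment code), the sketch below instantiates the template in the simplest case, $\mathbb{Z}_2$ synchronization:

```python
import numpy as np

def projected_power_method(A, iters=100):
    """z <- sign(A z): a power step plus entrywise projection onto
    {-1, +1}^n. A Z_2-synchronization sketch of the projected-power
    template, not the paper's general m-ary alignment algorithm."""
    z = np.sign(np.linalg.eigh(A)[1][:, -1])   # spectral initialization
    z[z == 0] = 1.0
    for _ in range(iters):
        z_next = np.sign(A @ z)                # power step + projection
        z_next[z_next == 0] = 1.0
        if np.array_equal(z_next, z):          # fixed point: stop
            break
        z = z_next
    return z

# toy usage: recover hidden signs x from noisy pairwise products
rng = np.random.default_rng(0)
n = 200
x = rng.choice([-1.0, 1.0], size=n)
noise = rng.standard_normal((n, n))
A = np.outer(x, x) + 0.5 * (noise + noise.T)
z = projected_power_method(A)
print("agreement:", abs(z @ x) / n)            # near 1 up to a global sign
```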

False Discoveries Occur Early on the Lasso Path

3 code implementations • 5 Nov 2015 • Weijie Su, Malgorzata Bogdan, Emmanuel Candes

In regression settings where explanatory variables have very low correlations and there are relatively few effects, each of large magnitude, we expect the Lasso to find the important variables with few errors, if any.
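
The claim is easy to probe numerically. The following simulation is purely illustrative (all problem sizes are arbitrary choices of mine): it draws an independent Gaussian design with a few strong effects, traces the lasso path with scikit-learn, and counts null variables among the earliest entries.

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(1)
n, p, k = 1000, 1000, 50                 # k strong effects among p variables
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[:k] = 10.0
y = X @ beta + rng.standard_normal(n)

# columns of coefs trace the path from large to small regularization
alphas, coefs, _ = lasso_path(X, y)
entered, order = set(), []
for t in range(coefs.shape[1]):
    for j in np.flatnonzero(coefs[:, t]):
        if j not in entered:
            entered.add(j)
            order.append(j)
print("nulls among the first", k, "variables to enter:",
      sum(j >= k for j in order[:k]))
```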

SLOPE is Adaptive to Unknown Sparsity and Asymptotically Minimax

no code implementations • 29 Mar 2015 • Weijie Su, Emmanuel Candes

We consider high-dimensional sparse regression problems in which we observe $y = X \beta + z$, where $X$ is an $n \times p$ design matrix and $z$ is an $n$-dimensional vector of independent Gaussian errors, each with variance $\sigma^2$.

Statistics Theory • Information Theory
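
Concretely, SLOPE penalizes the sorted coefficient magnitudes with a decreasing weight sequence:

```latex
\hat{\beta} \;=\; \operatorname*{arg\,min}_{b \in \mathbb{R}^p}\;
  \frac{1}{2}\,\|y - Xb\|_2^2 \;+\; \sum_{i=1}^{p} \lambda_i\, |b|_{(i)},
\qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0
```

where $|b|_{(1)} \ge \cdots \ge |b|_{(p)}$ are the order statistics of the coefficient magnitudes, with Benjamini-Hochberg-inspired weights of the form $\lambda_i \propto \Phi^{-1}(1 - iq/(2p))$.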

A Differential Equation for Modeling Nesterov’s Accelerated Gradient Method: Theory and Insights

no code implementations • NeurIPS 2014 • Weijie Su, Stephen Boyd, Emmanuel Candes

We derive a second-order ordinary differential equation (ODE), which is the limit of Nesterov’s accelerated gradient method.
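
For reference, the limiting ODE derived in the paper is

```latex
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\big(X(t)\big) = 0,
\qquad X(0) = x_0, \quad \dot{X}(0) = 0
```

obtained from Nesterov's scheme in the limit of vanishing step size; the vanishing damping coefficient $3/t$ drives much of the analysis.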

Stable Principal Component Pursuit

1 code implementation • 14 Jan 2010 • Zihan Zhou, XiaoDong Li, John Wright, Emmanuel Candes, Yi Ma

We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entrywise noise and robust to gross sparse errors.

Information Theory
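
The relaxed PCP referred to in the abstract observes $M = L_0 + S_0 + Z_0$, where $L_0$ is low-rank, $S_0$ is sparse, and the noise satisfies $\|Z_0\|_F \le \delta$, and solves

```latex
\min_{L,\,S}\; \|L\|_{*} + \lambda\,\|S\|_{1}
\quad \text{subject to} \quad \|M - L - S\|_{F} \le \delta
```

where $\|\cdot\|_{*}$ is the nuclear norm, $\|\cdot\|_{1}$ the entrywise $\ell_1$ norm, and $\lambda = 1/\sqrt{n}$ the standard weight for $n \times n$ matrices.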
