Search Results for author: Emmanuel Candès

Found 7 papers, 6 with code

A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex Lasso Models with Reflection Features

no code implementations · 2 Mar 2024 · Emi Zeger, Yifei Wang, Aaron Mishkin, Tolga Ergen, Emmanuel Candès, Mert Pilanci

We prove that training neural networks on 1-D data is equivalent to solving a convex Lasso problem with a fixed, explicitly defined dictionary matrix of features.

Bellman Conformal Inference: Calibrating Prediction Intervals For Time Series

1 code implementation · 7 Feb 2024 · Zitong Yang, Emmanuel Candès, Lihua Lei

We introduce Bellman Conformal Inference (BCI), a framework that wraps around any time series forecasting model and provides approximately calibrated prediction intervals.

Prediction Intervals · Time Series +1
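The paper's Bellman/stochastic-control machinery is beyond a short snippet, but the split-conformal interval construction that calibration methods like this build on can be sketched as follows (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def split_conformal_interval(cal_residuals, y_pred, alpha=0.1):
    """Standard split-conformal interval around a point forecast.

    cal_residuals: absolute residuals |y - yhat| on a held-out
    calibration set; y_pred: the point forecast for a new time step.
    """
    n = len(cal_residuals)
    # Finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n,
    # capped at 1 so np.quantile stays well-defined for small n.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(cal_residuals, q_level, method="higher")
    return y_pred - q, y_pred + q
```

With exchangeable data this interval covers the truth with probability at least 1 - alpha; methods like BCI then adjust the calibration over time rather than treating the residual distribution as fixed.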

Uncertainty Quantification over Graph with Conformalized Graph Neural Networks

1 code implementation · NeurIPS 2023 · Kexin Huang, Ying Jin, Emmanuel Candès, Jure Leskovec

We establish a permutation invariance condition that enables the validity of CP on graph data and provide an exact characterization of the test-time coverage.

Conformal Prediction · Uncertainty Quantification +1

Conformal Inference for Online Prediction with Arbitrary Distribution Shifts

3 code implementations · 17 Aug 2022 · Isaac Gibbs, Emmanuel Candès

We consider the problem of forming prediction sets in an online setting where the distribution generating the data is allowed to vary over time.

Prediction Intervals
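The core online-calibration idea in this line of work is a simple gradient-style update of the working miscoverage level after each observation; a minimal sketch of that update (the paper's full method also adapts the step size, which is omitted here):

```python
def aci_update(alpha_t, err_t, target_alpha=0.1, gamma=0.005):
    """One adaptive conformal inference step.

    err_t: 1 if the last prediction set missed the true value, else 0.
    If we just miscovered, lower alpha_t (widen future sets);
    if we covered, raise it slightly toward the target rate.
    """
    return alpha_t + gamma * (target_alpha - err_t)
```

Over time the empirical miscoverage rate is driven toward target_alpha even when the data distribution shifts, since persistent miscoverage keeps pushing alpha_t down.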

Testing for Outliers with Conformal p-values

1 code implementation · 16 Apr 2021 · Stephen Bates, Emmanuel Candès, Lihua Lei, Yaniv Romano, Matteo Sesia

We then introduce a new method to compute p-values that are both valid conditionally on the training data and independent of each other for different test points; this paves the way to stronger type-I error guarantees.

Outlier Detection
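The standard marginal conformal p-value that this paper builds on can be computed in a few lines (the paper's contribution, p-values valid conditionally on the training data, requires more machinery and is not shown):

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    """Marginal conformal p-value for outlier detection.

    cal_scores: nonconformity scores of held-out inlier (null) data;
    larger score = more outlying. The p-value is the (smoothed) rank
    of the test score among the calibration scores.
    """
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (n + 1)
```

If the test point is exchangeable with the calibration inliers, this p-value is super-uniform, so rejecting when it falls below a threshold controls the type-I error marginally.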

Derandomizing Knockoffs

2 code implementations · 4 Dec 2020 · Zhimei Ren, Yuting Wei, Emmanuel Candès

Model-X knockoffs is a general procedure that can leverage any feature importance measure to produce a variable selection algorithm, which discovers true effects while rigorously controlling the number or fraction of false positives.

Feature Importance · Variable Selection · Methodology · Applications
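Given feature statistics W_j (importance of feature j minus importance of its knockoff), the knockoff+ filter selects variables by thresholding at the smallest t whose estimated false discovery proportion is at most the target level q; a minimal sketch of that selection step, with the knockoff construction itself omitted:

```python
import numpy as np

def knockoff_threshold(W, q=0.1):
    """Knockoff+ threshold: smallest t with estimated FDP <= q.

    W: feature statistics, large positive values favor true effects;
    sign-symmetric under the null by construction of the knockoffs.
    """
    for t in np.sort(np.abs(W[W != 0])):
        # Negative statistics at magnitude >= t estimate the
        # number of null features among the positives.
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return t
    return np.inf  # no threshold achieves the target FDP

def knockoff_select(W, q=0.1):
    """Indices of features selected by the knockoff+ filter."""
    return np.where(W >= knockoff_threshold(W, q))[0]
```

Derandomization then aggregates the selections from many independent knockoff draws, since a single draw of the knockoff matrix makes the selected set random.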

Metropolized Knockoff Sampling

1 code implementation · 1 Mar 2019 · Stephen Bates, Emmanuel Candès, Lucas Janson, Wenshuo Wang

Model-X knockoffs is a wrapper that transforms essentially any feature importance measure into a variable selection algorithm, which discovers true effects while rigorously controlling the expected fraction of false positives.

Methodology
