no code implementations • 2 Mar 2024 • Emi Zeger, Yifei Wang, Aaron Mishkin, Tolga Ergen, Emmanuel Candès, Mert Pilanci
We prove that training neural networks on 1-D data is equivalent to solving a convex Lasso problem with a fixed, explicitly defined dictionary matrix of features.
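To make the reduction concrete, here is a minimal sketch of fitting 1-D data by solving a Lasso over a fixed dictionary of ReLU ramp features with breakpoints at the training inputs; the paper derives the exact dictionary, which this illustration does not reproduce, so the feature choice here is an assumption for demonstration only.

```python
# Minimal sketch: fit 1-D data by solving a convex Lasso over a fixed
# dictionary of ReLU ramp features max(0, x - b). The exact dictionary in the
# paper may differ; this is an illustrative assumption.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, size=50))              # 1-D inputs
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(50)       # noisy targets

breakpoints = x                                            # one ramp per input
D = np.maximum(0.0, x[:, None] - breakpoints[None, :])     # dictionary matrix (n x n)

lasso = Lasso(alpha=0.01, fit_intercept=True, max_iter=50_000)
lasso.fit(D, y)
print("active dictionary atoms:", int(np.sum(np.abs(lasso.coef_) > 1e-8)))
```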
1 code implementation • 7 Feb 2024 • Zitong Yang, Emmanuel Candès, Lihua Lei
We introduce Bellman Conformal Inference (BCI), a framework that wraps around any time series forecasting model and provides approximately calibrated prediction intervals.
1 code implementation • NeurIPS 2023 • Kexin Huang, Ying Jin, Emmanuel Candès, Jure Leskovec
We establish a permutation invariance condition that enables the validity of CP on graph data and provide an exact characterization of the test-time coverage.
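For context, the standard split-conformal recipe that the graph result builds on looks like the sketch below; it is not graph-specific, and the scoring rule (one minus the softmax probability of the true class) is an assumed, common choice rather than the paper's construction.

```python
# Split conformal classification: calibrate a score threshold on held-out data,
# then return, for each test point, the set of classes passing the threshold.
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    n = len(cal_labels)
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]   # nonconformity scores
    k = int(np.ceil((n + 1) * (1 - alpha)))                  # finite-sample correction
    q = np.sort(cal_scores)[min(k, n) - 1]                   # calibrated threshold
    return [np.where(1.0 - probs <= q)[0] for probs in test_probs]

# Toy usage with random "softmax" outputs over 3 classes.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = rng.integers(0, 3, size=200)
test_probs = rng.dirichlet(np.ones(3), size=5)
print(conformal_sets(cal_probs, cal_labels, test_probs))
```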
3 code implementations • 17 Aug 2022 • Isaac Gibbs, Emmanuel Candès
We consider the problem of forming prediction sets in an online setting where the distribution generating the data is allowed to vary over time.
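This line of work builds on adaptive conformal inference; a minimal sketch of the basic fixed-step update, which nudges the working miscoverage level after each observation, is shown below. The paper's algorithm goes further (for instance by adapting the step size), so treat this as background rather than the method itself.

```python
# Basic adaptive conformal inference update: after each step, lower the working
# miscoverage level alpha_t when the interval missed (widening future intervals)
# and raise it when it covered. The step size gamma is fixed in this sketch.
import numpy as np

def online_coverage(y, preds, window=50, alpha=0.1, gamma=0.01):
    alpha_t, hits = alpha, []
    for t in range(window, len(y)):
        resid = np.abs(y[t - window:t] - preds[t - window:t])
        q = np.quantile(resid, min(max(1.0 - alpha_t, 0.0), 1.0))
        miss = 0.0 if preds[t] - q <= y[t] <= preds[t] + q else 1.0
        hits.append(1.0 - miss)
        alpha_t += gamma * (alpha - miss)       # widen after misses, shrink otherwise
    return float(np.mean(hits))

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))         # random-walk series
preds = np.concatenate([[0.0], y[:-1]])         # naive one-step-ahead forecast
print("empirical coverage:", online_coverage(y, preds))
```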
1 code implementation • 16 Apr 2021 • Stephen Bates, Emmanuel Candès, Lihua Lei, Yaniv Romano, Matteo Sesia
We then introduce a new method to compute p-values that are both valid conditionally on the training data and independent of each other for different test points; this paves the way for stronger type-I error guarantees.
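The starting point is the standard marginal conformal p-value, computed from a calibration set of nonconformity scores as sketched below; the conditionally valid, mutually independent p-values described above require the paper's additional construction, which this sketch does not implement.

```python
# Marginal conformal p-values: p = (1 + #{calibration scores >= test score}) / (n + 1).
import numpy as np

def conformal_pvalues(cal_scores, test_scores):
    cal_scores = np.asarray(cal_scores)
    n = len(cal_scores)
    return np.array([(1 + np.sum(cal_scores >= s)) / (n + 1) for s in test_scores])

rng = np.random.default_rng(0)
cal = rng.standard_normal(300)            # scores on inlier calibration data
test = np.array([0.1, 2.5, 4.0])          # larger score = more "outlying"
print(conformal_pvalues(cal, test))
```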
2 code implementations • 4 Dec 2020 • Zhimei Ren, Yuting Wei, Emmanuel Candès
Model-X knockoffs is a general procedure that can leverage any feature importance measure to produce a variable selection algorithm, which discovers true effects while rigorously controlling the number or fraction of false positives.
Feature Importance • Variable Selection • Methodology • Applications
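Because each knockoff draw is random, the selected set varies from run to run; the derandomization idea is to repeat the selection over independent runs and keep features selected sufficiently often. The sketch below illustrates only that aggregation step, with a hypothetical `toy_selector` standing in for an actual knockoff filter; the paper's exact aggregation rule and guarantees are not reproduced here.

```python
# Derandomization by aggregation: run a randomized selector many times and keep
# features whose selection frequency reaches eta. `toy_selector` is a stand-in
# for a real knockoff filter (whose randomness comes from sampling knockoffs).
import numpy as np

def derandomized_selection(selector, n_runs=30, eta=0.5, p=20, seed=0):
    rng = np.random.default_rng(seed)
    freq = np.zeros(p)
    for _ in range(n_runs):
        freq[selector(rng)] += 1.0 / n_runs      # per-feature selection frequency
    return np.where(freq >= eta)[0]

def toy_selector(rng, p=20):
    strong = np.where(rng.random(5) < 0.9)[0]            # "true" features 0-4
    weak = 5 + np.where(rng.random(p - 5) < 0.1)[0]      # occasional false picks
    return np.concatenate([strong, weak])

print("stable selections:", derandomized_selection(toy_selector))
```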
1 code implementation • 1 Mar 2019 • Stephen Bates, Emmanuel Candès, Lucas Janson, Wenshuo Wang
Model-X knockoffs is a wrapper that transforms essentially any feature importance measure into a variable selection algorithm, which discovers true effects while rigorously controlling the expected fraction of false positives.
Methodology
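The selection step behind this wrapper is the knockoff filter: form signed importance statistics W_j (original-feature importance minus knockoff importance) and pick a data-dependent threshold, as in the sketch below. The toy design uses independent N(0,1) features, for which an independent copy of X is a valid knockoff; real designs need a proper model-X knockoff sampler, so this is a simplified illustration rather than a general recipe.

```python
# Knockoff filter selection step: importance statistics from a lasso on the
# augmented design [X, X_knockoff], thresholded so the estimated false discovery
# proportion stays below q. Independent-copy knockoffs are valid only because
# the toy features are independent N(0,1).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p, q = 400, 40, 0.1
X = rng.standard_normal((n, p))
Xk = rng.standard_normal((n, p))                  # independent-copy knockoffs
beta = np.zeros(p)
beta[:8] = 1.2                                    # 8 true effects
y = X @ beta + rng.standard_normal(n)

coef = LassoCV(cv=5).fit(np.hstack([X, Xk]), y).coef_
W = np.abs(coef[:p]) - np.abs(coef[p:])           # signed importance statistics

selected = np.array([], dtype=int)
for t in np.sort(np.abs(W[W != 0])):              # knockoff+ threshold search
    if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= q:
        selected = np.where(W >= t)[0]
        break
print("selected features:", selected)
```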