no code implementations • 18 Nov 2024 • Anastasios N. Angelopoulos, Rina Foygel Barber, Stephen Bates
This book is about conformal prediction and related inferential techniques that build on permutation tests and exchangeability.
1 code implementation • 28 Mar 2024 • Drew T. Nguyen, Reese Pathak, Anastasios N. Angelopoulos, Stephen Bates, Michael I. Jordan
Decision-making pipelines are generally characterized by tradeoffs among various risk functions.
1 code implementation • 2 Feb 2024 • Anastasios N. Angelopoulos, Rina Foygel Barber, Stephen Bates
We introduce a method for online conformal prediction with decaying step sizes.
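The core update is a simple online quantile-tracking step. The sketch below is a minimal illustration under an assumed absolute-residual score and an assumed 1/sqrt(t) step-size schedule; it is not the paper's full method or analysis.

```python
import numpy as np

def online_conformal(scores, alpha=0.1, eta0=1.0):
    """Track a conformal threshold online with a decaying step size.

    scores[t] is the nonconformity score of the t-th observation
    (e.g. |y_t - model(x_t)| for regression). A point is covered at
    time t if its score falls below the current threshold q_t.
    """
    q = 0.0                      # arbitrary starting threshold
    coverage = []
    for t, s in enumerate(scores, start=1):
        covered = s <= q
        coverage.append(covered)
        eta_t = eta0 / np.sqrt(t)            # decaying step size (assumed schedule)
        # Gradient step on the pinball loss: raise q after a miss, lower it slightly after a hit.
        q = q + eta_t * ((0.0 if covered else 1.0) - alpha)
    return q, np.mean(coverage)

# Example: scores from a stationary stream; long-run coverage should approach 1 - alpha.
rng = np.random.default_rng(0)
final_q, cov = online_conformal(np.abs(rng.normal(size=5000)), alpha=0.1)
print(f"final threshold {final_q:.2f}, empirical coverage {cov:.3f}")
```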
no code implementations • 4 Sep 2023 • Nivasini Ananthakrishnan, Stephen Bates, Michael I. Jordan, Nika Haghtalab
Motivated by the emergence of decentralized machine learning (ML) ecosystems, we study the delegation of data collection.
no code implementations • 7 Jul 2023 • Stephen Bates, Michael I. Jordan, Michael Sklar, Jake A. Soloff
We show how the principal can conduct statistical inference that leverages the information that is revealed by an agent's strategic behavior -- their choice to run a trial or not.
1 code implementation • NeurIPS 2023 • Tiffany Ding, Anastasios N. Angelopoulos, Stephen Bates, Michael I. Jordan, Ryan J. Tibshirani
Standard conformal prediction methods provide a marginal coverage guarantee, which means that for a random test point, the conformal prediction set contains the true label with a user-specified probability.
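The paper contrasts this marginal notion with class-conditional coverage. The small helper below (illustrative names, not the paper's clustering method) simply makes the distinction concrete by computing both quantities on a labeled test set.

```python
import numpy as np

def coverage_report(pred_sets, labels):
    """Contrast marginal coverage with per-class (conditional) coverage.

    pred_sets is a list of sets of candidate labels, labels the true labels.
    Marginal coverage averages over all test points; class-conditional coverage
    averages within each class and can be much lower for rare or hard classes.
    """
    labels = np.asarray(labels)
    hits = np.array([y in s for s, y in zip(pred_sets, labels)])
    marginal = hits.mean()
    per_class = {c: hits[labels == c].mean() for c in np.unique(labels)}
    return marginal, per_class
```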
no code implementations • 24 May 2023 • Serena Wang, Stephen Bates, P. M. Aronow, Michael I. Jordan
From the social sciences to machine learning, it has been well documented that metrics to be optimized are not always aligned with social welfare.
2 code implementations • 23 Jan 2023 • Anastasios N. Angelopoulos, Stephen Bates, Clara Fannjiang, Michael I. Jordan, Tijana Zrnic
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
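For the simplest estimand, a population mean, the prediction-powered point estimate adds a bias-correcting "rectifier" computed on the labeled data to the average prediction on the unlabeled data. A minimal sketch of that special case (normal-approximation interval, assumed variable names) follows; the framework covers far more general estimands.

```python
import numpy as np
from scipy.stats import norm

def ppi_mean_ci(y_lab, yhat_lab, yhat_unlab, alpha=0.05):
    """Prediction-powered confidence interval for a population mean (sketch).

    y_lab:       labels on the small gold-standard set
    yhat_lab:    model predictions on that same set
    yhat_unlab:  model predictions on the large unlabeled set
    """
    y_lab, yhat_lab, yhat_unlab = map(np.asarray, (y_lab, yhat_lab, yhat_unlab))
    n, N = len(y_lab), len(yhat_unlab)
    rectifier = y_lab - yhat_lab                 # corrects the prediction bias
    theta = yhat_unlab.mean() + rectifier.mean() # point estimate
    se = np.sqrt(yhat_unlab.var(ddof=1) / N + rectifier.var(ddof=1) / n)
    z = norm.ppf(1 - alpha / 2)
    return theta - z * se, theta + z * se
```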
no code implementations • 10 Nov 2022 • Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, Michael I. Jordan
This result shows that exponential-in-$m$ samples are both necessary and sufficient to learn a near-optimal contract, resolving an open problem on the hardness of online contract design.

no code implementations • 28 Sep 2022 • Bat-Sheva Einbinder, Shai Feldman, Stephen Bates, Anastasios N. Angelopoulos, Asaf Gendler, Yaniv Romano
We study the robustness of conformal prediction, a powerful tool for uncertainty quantification, to label noise.
1 code implementation • 4 Aug 2022 • Anastasios N. Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, Tal Schuster
We extend conformal prediction to control the expected value of any monotone loss function.
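Concretely, the calibration step amounts to picking the smallest threshold lambda whose slightly inflated empirical risk on n calibration points falls below the target level. The sketch below assumes a user-supplied loss_fn that is non-increasing in lambda and bounded above by B.

```python
import numpy as np

def crc_calibrate(loss_fn, lambdas, alpha, B=1.0):
    """Conformal risk control calibration (sketch).

    loss_fn(lam) -> array of per-example losses on n calibration points;
    losses are assumed non-increasing in lam and bounded above by B.
    Returns the smallest lambda whose adjusted empirical risk is <= alpha.
    """
    for lam in sorted(lambdas):                # scan from small to large lambda
        losses = np.asarray(loss_fn(lam))
        n = len(losses)
        # (n/(n+1)) * empirical risk + B/(n+1) <= alpha
        if (losses.mean() * n + B) / (n + 1) <= alpha:
            return lam
    return max(lambdas)                        # fall back to the most conservative choice
```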
1 code implementation • 20 Jul 2022 • Swami Sankaranarayanan, Anastasios N. Angelopoulos, Stephen Bates, Yaniv Romano, Phillip Isola
Meaningful uncertainty quantification in computer vision requires reasoning about semantic information -- say, the hair color of the person in a photo or the location of a car on the street.
no code implementations • 4 Jul 2022 • Anastasios N. Angelopoulos, Karl Krauth, Stephen Bates, Yixin Wang, Michael I. Jordan
Building from a pre-trained ranking model, we show how to return a set of items that is rigorously guaranteed to contain mostly good items.
no code implementations • 6 Jun 2022 • Yaodong Yu, Stephen Bates, Yi Ma, Michael I. Jordan
Uncertainty quantification is essential for the reliable deployment of machine learning models to high-stakes application domains.
1 code implementation • 18 May 2022 • Shai Feldman, Liran Ringel, Stephen Bates, Yaniv Romano
To provide rigorous uncertainty quantification for online learning models, we develop a framework for constructing uncertainty sets that provably control risk -- such as coverage of confidence intervals, false negative rate, or F1 score -- in the online setting.
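A minimal caricature of such an online update, for a generic bounded loss assumed non-increasing in the calibration parameter, is sketched below; the paper's framework and guarantees are considerably more general.

```python
import numpy as np

def online_risk_control(loss_at, T, alpha=0.1, gamma=0.05, theta0=0.0):
    """Generic online risk-control update (illustrative sketch).

    loss_at(t, theta) returns the loss incurred at time t when the uncertainty
    set is built with calibration parameter theta, and is assumed non-increasing
    in theta (bigger sets -> smaller loss, e.g. a miscoverage indicator or a
    false-negative proportion).
    """
    theta = theta0
    losses = []
    for t in range(T):
        ell = loss_at(t, theta)
        losses.append(ell)
        theta += gamma * (ell - alpha)   # grow the sets after high loss, shrink after low loss
    return theta, np.mean(losses)
```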
no code implementations • 13 May 2022 • Stephen Bates, Michael I. Jordan, Michael Sklar, Jake A. Soloff
The efficacy of the drug is not known to the regulator, so the pharmaceutical company must run a costly trial to prove efficacy to the regulator.
2 code implementations • 10 Feb 2022 • Anastasios N. Angelopoulos, Amit P. Kohli, Stephen Bates, Michael I. Jordan, Jitendra Malik, Thayer Alshaabi, Srigokul Upadhyayula, Yaniv Romano
Image-to-image regression is an important learning task, used frequently in biological imaging.
1 code implementation • 8 Feb 2022 • Clara Fannjiang, Stephen Bates, Anastasios N. Angelopoulos, Jennifer Listgarten, Michael I. Jordan
This is challenging because of a characteristic type of distribution shift between the training and test data in the design setting -- one in which the training and test data are statistically dependent, as the latter is chosen based on the former.
1 code implementation • 25 Jan 2022 • Mariel Werner, Anastasios Angelopoulos, Stephen Bates, Michael I. Jordan
The blessing of ubiquitous data also comes with a curse: the communication, storage, and labeling of massive, mostly redundant datasets.
1 code implementation • 3 Oct 2021 • Anastasios N. Angelopoulos, Stephen Bates, Emmanuel J. Candès, Michael I. Jordan, Lihua Lei
We introduce a framework for calibrating machine learning models so that their predictions satisfy explicit, finite-sample statistical guarantees.
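One instantiation of this idea treats each candidate calibration parameter as a null hypothesis ("the risk exceeds alpha") and keeps only the parameters whose nulls are rejected after a multiple-testing correction. The sketch below uses a Hoeffding-style p-value and a Bonferroni correction as assumed ingredients; losses are assumed i.i.d. and bounded in [0, 1].

```python
import numpy as np

def calibrate_by_testing(loss_fn, lambdas, alpha=0.1, delta=0.1):
    """Calibration as multiple testing (sketch).

    For each candidate lambda, test H0: risk(lambda) > alpha with a
    Hoeffding-style p-value on n i.i.d. losses in [0, 1], then apply a
    Bonferroni correction; every returned lambda controls the risk at
    level alpha with probability at least 1 - delta.
    """
    valid = []
    for lam in lambdas:
        losses = np.asarray(loss_fn(lam))           # per-example losses on calibration data
        n, r_hat = len(losses), losses.mean()
        p_value = np.exp(-2 * n * max(alpha - r_hat, 0.0) ** 2)
        if p_value <= delta / len(lambdas):         # Bonferroni over the grid
            valid.append(lam)
    return valid
```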
1 code implementation • 2 Oct 2021 • Shai Feldman, Stephen Bates, Yaniv Romano
We develop a method to generate predictive regions that cover a multivariate response variable with a user-specified probability.
4 code implementations • 15 Jul 2021 • Anastasios N. Angelopoulos, Stephen Bates
Conformal prediction is a user-friendly paradigm for creating statistically rigorous uncertainty sets/intervals for the predictions of black-box machine learning models.
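For intuition, the split-conformal recipe in the regression case takes a finite-sample-corrected quantile of held-out residuals and pads the model's prediction by it. A minimal sketch (placeholder model with a scikit-learn-style predict method, absolute-residual score) follows.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, x_test, alpha=0.1):
    """Split conformal prediction interval from absolute-residual scores (sketch).

    model is any fitted regressor with a .predict method; the resulting coverage
    guarantee is distribution-free and holds on average over exchangeable
    calibration and test data.
    """
    scores = np.abs(y_cal - model.predict(X_cal))          # nonconformity scores
    n = len(scores)
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    pred = model.predict(np.atleast_2d(x_test))
    return pred - q, pred + q
```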
no code implementations • NeurIPS 2021 • Celestine Mendler-Dünner, Wenshuo Guo, Stephen Bates, Michael I. Jordan
An increasingly common setting in machine learning involves multiple parties, each with their own data, who want to jointly make predictions on future test points.
1 code implementation • NeurIPS 2021 • Shai Feldman, Stephen Bates, Yaniv Romano
To remedy this, we modify the loss function to promote independence between the size of the intervals and the indicator of a miscoverage event.
1 code implementation • 16 Apr 2021 • Stephen Bates, Emmanuel Candès, Lihua Lei, Yaniv Romano, Matteo Sesia
We then introduce a new method to compute p-values that are both valid conditionally on the training data and independent of each other for different test points; this paves the way to stronger type-I error guarantees.
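The standard marginal conformal p-value is simply a rank of the test score among held-out inlier scores; the sketch below shows that baseline. The calibration-conditional, mutually independent construction introduced in the paper requires an additional adjustment not shown here.

```python
import numpy as np

def conformal_pvalues(cal_scores, test_scores):
    """Marginal conformal p-values for outlier detection (sketch).

    Scores are nonconformity scores (higher = more outlying) from a model fit
    on separate training data; cal_scores come from held-out inliers. Each
    p-value is super-uniform for a new inlier drawn from the same distribution.
    """
    cal_scores = np.sort(np.asarray(cal_scores))
    n = len(cal_scores)
    # p = (1 + #{calibration scores >= test score}) / (n + 1)
    ranks = np.searchsorted(cal_scores, np.asarray(test_scores), side="left")
    return (1 + n - ranks) / (n + 1)
```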
2 code implementations • 1 Apr 2021 • Stephen Bates, Trevor Hastie, Robert Tibshirani
Cross-validation is a widely-used technique to estimate prediction error, but its behavior is complex and not fully understood.
1 code implementation • 11 Feb 2021 • Anastasios N. Angelopoulos, Stephen Bates, Tijana Zrnic, Michael I. Jordan
Our method follows the general approach of split conformal prediction; we use holdout data to calibrate the size of the prediction sets but preserve privacy by using a privatized quantile subroutine.
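A bare-bones illustration of the idea: replace the empirical quantile of the calibration scores with a differentially private one, here an exponential-mechanism quantile over an assumed finite grid. This omits the inflation of the quantile level that the paper uses to retain exact coverage, so it is a sketch of the structure only.

```python
import numpy as np

def private_quantile(scores, level, epsilon, grid):
    """Differentially private quantile via the exponential mechanism (illustrative).

    Utility of a grid point is minus the distance between its empirical count and
    the target rank; changing one score moves any count by at most 1 (sensitivity 1).
    """
    counts = np.array([(scores <= g).sum() for g in grid])
    utility = -np.abs(counts - level * len(scores))
    weights = np.exp(epsilon * (utility - utility.max()) / 2)   # stabilized
    return np.random.default_rng().choice(grid, p=weights / weights.sum())

def private_split_conformal(scores, alpha, epsilon, grid):
    """Split conformal with a privatized calibration quantile (sketch only).

    Returns a threshold q; the prediction set is {y : score(x, y) <= q}.
    The paper's correction for the privacy noise is not included.
    """
    return private_quantile(np.asarray(scores), 1 - alpha, epsilon, grid)
```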
3 code implementations • 7 Jan 2021 • Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, Michael I. Jordan
While improving prediction accuracy has been the focus of machine learning in recent years, this alone does not suffice for reliable decision-making.
5 code implementations • ICLR 2021 • Anastasios Angelopoulos, Stephen Bates, Jitendra Malik, Michael I. Jordan
Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings.
1 code implementation • NeurIPS 2020 • Yaniv Romano, Stephen Bates, Emmanuel J. Candès
We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness.
1 code implementation • 1 Mar 2019 • Stephen Bates, Emmanuel Candès, Lucas Janson, Wenshuo Wang
Model-X knockoffs is a wrapper that transforms essentially any feature importance measure into a variable selection algorithm, which discovers true effects while rigorously controlling the expected fraction of false positives.
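The selection step of the filter is short once knockoff copies and a feature-importance measure are available: compare each feature's importance to its knockoff's and apply a data-dependent threshold. A sketch is below; constructing valid knockoff variables is the hard part and is assumed given.

```python
import numpy as np

def knockoff_select(importance_x, importance_xk, q=0.1):
    """Knockoff filter selection step (sketch).

    importance_x[j] and importance_xk[j] are feature-importance statistics for
    feature j and its knockoff copy. W_j > 0 suggests a real effect; the
    data-dependent (knockoff+) threshold controls the false discovery rate at level q.
    """
    W = np.asarray(importance_x) - np.asarray(importance_xk)
    thresholds = np.sort(np.abs(W[W != 0]))
    for t in thresholds:
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return np.where(W >= t)[0]                # selected feature indices
    return np.array([], dtype=int)
```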