Search Results for author: Stephen Bates

Found 31 papers, 21 papers with code

Theoretical Foundations of Conformal Prediction

no code implementations · 18 Nov 2024 · Anastasios N. Angelopoulos, Rina Foygel Barber, Stephen Bates

This book is about conformal prediction and related inferential techniques that build on permutation tests and exchangeability.

Conformal Prediction, Uncertainty Quantification

Delegating Data Collection in Decentralized Machine Learning

no code implementations · 4 Sep 2023 · Nivasini Ananthakrishnan, Stephen Bates, Michael I. Jordan, Nika Haghtalab

Motivated by the emergence of decentralized machine learning (ML) ecosystems, we study the delegation of data collection.

Incentive-Theoretic Bayesian Inference for Collaborative Science

no code implementations · 7 Jul 2023 · Stephen Bates, Michael I. Jordan, Michael Sklar, Jake A. Soloff

We show how the principal can conduct statistical inference that leverages the information that is revealed by an agent's strategic behavior -- their choice to run a trial or not.

Bayesian Inference

Class-Conditional Conformal Prediction with Many Classes

1 code implementation · NeurIPS 2023 · Tiffany Ding, Anastasios N. Angelopoulos, Stephen Bates, Michael I. Jordan, Ryan J. Tibshirani

Standard conformal prediction methods provide a marginal coverage guarantee, which means that for a random test point, the conformal prediction set contains the true label with a user-specified probability.

Conformal Prediction
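
A rough sketch of the classwise (class-conditional) baseline this line of work starts from: calibrate a separate score quantile for each class. Variable names are illustrative, and the paper's clustering refinement for handling many classes is not shown.

```python
import numpy as np

def classwise_conformal_sets(cal_scores, cal_labels, test_scores, alpha=0.1):
    """Classwise conformal prediction sets.

    cal_scores: (n, K) nonconformity scores for calibration points, one column per class.
    cal_labels: (n,) true labels of the calibration points.
    test_scores: (m, K) nonconformity scores for test points.
    Each returned set covers its true class with probability >= 1 - alpha, per class.
    """
    n, K = cal_scores.shape
    qhat = np.full(K, np.inf)
    for k in range(K):
        scores_k = cal_scores[cal_labels == k, k]
        n_k = len(scores_k)
        if n_k > 0:
            level = np.ceil((n_k + 1) * (1 - alpha)) / n_k   # finite-sample correction
            qhat[k] = np.quantile(scores_k, level, method="higher") if level <= 1 else np.inf
    # include every class whose score falls below that class's calibrated threshold
    return [np.where(s <= qhat)[0] for s in test_scores]
```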

Operationalizing Counterfactual Metrics: Incentives, Ranking, and Information Asymmetry

no code implementations · 24 May 2023 · Serena Wang, Stephen Bates, P. M. Aronow, Michael I. Jordan

From the social sciences to machine learning, it has been well documented that metrics to be optimized are not always aligned with social welfare.

Causal Inference, counterfactual

Prediction-Powered Inference

2 code implementations · 23 Jan 2023 · Anastasios N. Angelopoulos, Stephen Bates, Clara Fannjiang, Michael I. Jordan, Tijana Zrnic

Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.

Astronomy, regression, +1 more
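
A minimal sketch of the prediction-powered estimate of a population mean, the simplest case of the framework; the array names and the normal-approximation interval are illustrative.

```python
import numpy as np
from scipy.stats import norm

def ppi_mean_ci(Y_lab, Yhat_lab, Yhat_unlab, alpha=0.1):
    """Prediction-powered point estimate and confidence interval for a mean.

    Y_lab:      gold-standard labels on the small labeled set (size n)
    Yhat_lab:   model predictions on that same labeled set
    Yhat_unlab: model predictions on the large unlabeled set (size N)
    """
    n, N = len(Y_lab), len(Yhat_unlab)
    rectifier = Y_lab - Yhat_lab                     # measures the prediction error
    theta = Yhat_unlab.mean() + rectifier.mean()     # prediction-powered point estimate
    se = np.sqrt(Yhat_unlab.var(ddof=1) / N + rectifier.var(ddof=1) / n)
    z = norm.ppf(1 - alpha / 2)
    return theta, (theta - z * se, theta + z * se)
```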

The Sample Complexity of Online Contract Design

no code implementations · 10 Nov 2022 · Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, Michael I. Jordan

This result shows that exponential-in-$m$ samples are sufficient and necessary to learn a near-optimal contract, resolving an open problem on the hardness of online contract design.

Conformal Risk Control

1 code implementation · 4 Aug 2022 · Anastasios N. Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, Tal Schuster

We extend conformal prediction to control the expected value of any monotone loss function.

Conformal Prediction
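
A hedged sketch of the calibration step: pick the smallest λ on a grid whose conformally corrected empirical risk falls below the target level α, with losses assumed monotone non-increasing in λ and bounded by B, as in the paper's setting.

```python
import numpy as np

def conformal_risk_control(cal_losses, lambda_grid, alpha=0.1, B=1.0):
    """Pick the smallest lambda whose conformally adjusted risk is below alpha.

    cal_losses[i, j]: loss of calibration point i at lambda_grid[j] (ascending grid),
    assumed non-increasing in lambda and bounded above by B.
    """
    n = cal_losses.shape[0]
    risk_hat = cal_losses.mean(axis=0)
    bound = (n / (n + 1)) * risk_hat + B / (n + 1)   # conformal correction
    ok = np.flatnonzero(bound <= alpha)
    return lambda_grid[ok[0]] if ok.size else None   # smallest valid lambda, if any
```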

Semantic uncertainty intervals for disentangled latent spaces

1 code implementation · 20 Jul 2022 · Swami Sankaranarayanan, Anastasios N. Angelopoulos, Stephen Bates, Yaniv Romano, Phillip Isola

Meaningful uncertainty quantification in computer vision requires reasoning about semantic information -- say, the hair color of the person in a photo or the location of a car on the street.

Image Super-Resolution, quantile regression, +1 more

Recommendation Systems with Distribution-Free Reliability Guarantees

no code implementations · 4 Jul 2022 · Anastasios N. Angelopoulos, Karl Krauth, Stephen Bates, Yixin Wang, Michael I. Jordan

Building from a pre-trained ranking model, we show how to return a set of items that is rigorously guaranteed to contain mostly good items.

Diversity, Learning-To-Rank, +1 more

Robust Calibration with Multi-domain Temperature Scaling

no code implementations · 6 Jun 2022 · Yaodong Yu, Stephen Bates, Yi Ma, Michael I. Jordan

Uncertainty quantification is essential for the reliable deployment of machine learning models to high-stakes application domains.

Uncertainty Quantification
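
For context, a sketch of plain single-domain temperature scaling, the baseline calibration method the paper extends; the multi-domain procedure itself is not shown.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(logits, labels):
    """Fit a temperature T > 0 by minimizing the NLL on held-out (validation) data."""
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels)

    def nll(T):
        z = logits / T
        z -= z.max(axis=1, keepdims=True)                       # numerical stability
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(labels)), labels].mean()

    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x
```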

Achieving Risk Control in Online Learning Settings

1 code implementation · 18 May 2022 · Shai Feldman, Liran Ringel, Stephen Bates, Yaniv Romano

To provide rigorous uncertainty quantification for online learning models, we develop a framework for constructing uncertainty sets that provably control risk -- such as coverage of confidence intervals, false negative rate, or F1 score -- in the online setting.

Conformal Prediction, Depth Estimation, +4 more
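
A generic sketch of the online mechanism: nudge a set-size parameter after each observation so the running loss tracks the target level. This illustrates the general idea only, not the paper's exact algorithm; make_set is a hypothetical stand-in for the base model.

```python
def online_risk_control(stream, make_set, alpha=0.1, eta=0.05, theta=0.0):
    """Online calibration of a set-size parameter theta.

    stream:   iterable of (x, y) pairs arriving over time.
    make_set: callable (x, theta) -> prediction set; any base predictor can be plugged in.
    """
    for x, y in stream:
        pred_set = make_set(x, theta)
        miss = float(y not in pred_set)    # realized loss: 1 if the set missed y
        theta += eta * (miss - alpha)      # grow sets after misses, shrink otherwise
        yield pred_set, theta
```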

Principal-Agent Hypothesis Testing

no code implementations · 13 May 2022 · Stephen Bates, Michael I. Jordan, Michael Sklar, Jake A. Soloff

The efficacy of the drug is not known to the regulator, so the pharmaceutical company must run a costly trial to prove efficacy to the regulator.

Conformal prediction for the design problem

1 code implementation · 8 Feb 2022 · Clara Fannjiang, Stephen Bates, Anastasios N. Angelopoulos, Jennifer Listgarten, Michael I. Jordan

This is challenging because of a characteristic type of distribution shift between the training and test data in the design setting -- one in which the training and test data are statistically dependent, as the latter is chosen based on the former.

Conformal Prediction

Optimal Data Selection: An Online Distributed View

1 code implementation · 25 Jan 2022 · Mariel Werner, Anastasios Angelopoulos, Stephen Bates, Michael I. Jordan

The blessing of ubiquitous data also comes with a curse: the communication, storage, and labeling of massive, mostly redundant datasets.

Active Learning

Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control

1 code implementation · 3 Oct 2021 · Anastasios N. Angelopoulos, Stephen Bates, Emmanuel J. Candès, Michael I. Jordan, Lihua Lei

We introduce a framework for calibrating machine learning models so that their predictions satisfy explicit, finite-sample statistical guarantees.

BIG-bench Machine Learning, Instance Segmentation, +3 more
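
One simple instance of the Learn-then-Test recipe, using Hoeffding p-values with a Bonferroni correction over a grid of candidate parameters; the paper also develops sharper bounds and sequential testing procedures, which are not shown here.

```python
import numpy as np

def learn_then_test(cal_losses, lambda_grid, alpha=0.1, delta=0.1):
    """Return the lambdas certified to have risk <= alpha, with probability >= 1 - delta.

    cal_losses[i, j]: loss in [0, 1] of calibration point i at lambda_grid[j].
    """
    n = cal_losses.shape[0]
    risk_hat = cal_losses.mean(axis=0)
    # Hoeffding p-value for H_j: the true risk at lambda_grid[j] exceeds alpha
    pvals = np.exp(-2.0 * n * np.clip(alpha - risk_hat, 0.0, None) ** 2)
    keep = pvals <= delta / len(lambda_grid)          # Bonferroni over the grid
    return [lam for lam, k in zip(lambda_grid, keep) if k]
```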

Calibrated Multiple-Output Quantile Regression with Representation Learning

1 code implementation · 2 Oct 2021 · Shai Feldman, Stephen Bates, Yaniv Romano

We develop a method to generate predictive regions that cover a multivariate response variable with a user-specified probability.

Conformal Prediction, quantile regression

A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification

4 code implementations · 15 Jul 2021 · Anastasios N. Angelopoulos, Stephen Bates

Conformal prediction is a user-friendly paradigm for creating statistically rigorous uncertainty sets/intervals for the predictions of machine learning models.

Conformal Prediction, Deep Reinforcement Learning, +3 more
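
A minimal sketch of the split conformal recipe the tutorial centers on, here for regression with absolute-residual scores; variable names are illustrative.

```python
import numpy as np

def split_conformal_interval(cal_residuals, test_pred, alpha=0.1):
    """Split conformal prediction intervals for regression.

    cal_residuals: |y - model(x)| on a held-out calibration set of size n.
    test_pred:     point prediction(s) for new inputs.
    Under exchangeability, the interval contains the true y with probability >= 1 - alpha.
    """
    n = len(cal_residuals)
    level = np.ceil((n + 1) * (1 - alpha)) / n          # finite-sample correction
    qhat = np.quantile(cal_residuals, min(level, 1.0), method="higher")
    return test_pred - qhat, test_pred + qhat
```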

Test-time Collective Prediction

no code implementations · NeurIPS 2021 · Celestine Mendler-Dünner, Wenshuo Guo, Stephen Bates, Michael I. Jordan

An increasingly common setting in machine learning involves multiple parties, each with their own data, who want to jointly make predictions on future test points.

Improving Conditional Coverage via Orthogonal Quantile Regression

1 code implementation · NeurIPS 2021 · Shai Feldman, Stephen Bates, Yaniv Romano

To remedy this, we modify the loss function to promote independence between the size of the intervals and the indicator of a miscoverage event.

Prediction Intervals, quantile regression, +1 more
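
A hedged sketch of that idea as a training loss: a two-sided pinball loss plus a penalty that decorrelates interval width from the miscoverage indicator. This is an illustration of the principle, not the paper's exact orthogonality penalty.

```python
import torch

def orthogonal_quantile_loss(y, lo, hi, alpha=0.1, gamma=1.0):
    """Pinball loss for the two quantiles plus a width/miscoverage decorrelation penalty.

    y, lo, hi: 1-D tensors of targets and predicted lower/upper quantiles.
    """
    def pinball(q, tau):
        diff = y - q
        return torch.mean(torch.maximum(tau * diff, (tau - 1.0) * diff))

    base = pinball(lo, alpha / 2) + pinball(hi, 1.0 - alpha / 2)
    width = hi - lo
    miss = ((y < lo) | (y > hi)).float()
    # penalize covariance between interval width and the miscoverage indicator
    penalty = torch.abs(torch.mean((width - width.mean()) * (miss - miss.mean())))
    return base + gamma * penalty
```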

Testing for Outliers with Conformal p-values

1 code implementation · 16 Apr 2021 · Stephen Bates, Emmanuel Candès, Lihua Lei, Yaniv Romano, Matteo Sesia

We then introduce a new method to compute p-values that are both valid conditionally on the training data and independent of each other for different test points; this paves the way to stronger type-I error guarantees.

Outlier Detection
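
For reference, the standard marginal conformal p-value that this paper strengthens; the paper's contribution, p-values that are also valid conditionally on the calibration data and mutually independent, is not shown.

```python
import numpy as np

def conformal_pvalue(test_score, cal_scores):
    """Marginal conformal p-value for outlier testing.

    cal_scores: nonconformity scores of n calibration inliers.
    test_score: score of the test point (higher = more atypical).
    Valid under exchangeability of the test point with the inliers.
    """
    cal_scores = np.asarray(cal_scores)
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (n + 1)
```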

Cross-validation: what does it estimate and how well does it do it?

2 code implementations · 1 Apr 2021 · Stephen Bates, Trevor Hastie, Robert Tibshirani

Cross-validation is a widely-used technique to estimate prediction error, but its behavior is complex and not fully understood.
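
For context, a sketch of the baseline procedure the paper analyzes: the K-fold point estimate of squared prediction error together with the naive standard error built from per-point errors (sklearn-based; names are illustrative).

```python
import numpy as np
from sklearn.model_selection import KFold

def kfold_cv_error(model, X, y, k=10, seed=0):
    """K-fold CV estimate of squared prediction error and its naive standard error."""
    errors = np.empty(len(y), dtype=float)
    for tr, te in KFold(n_splits=k, shuffle=True, random_state=seed).split(X):
        fit = model.fit(X[tr], y[tr])                 # refit on each training fold
        errors[te] = (y[te] - fit.predict(X[te])) ** 2
    return errors.mean(), errors.std(ddof=1) / np.sqrt(len(y))
```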

Private Prediction Sets

1 code implementation · 11 Feb 2021 · Anastasios N. Angelopoulos, Stephen Bates, Tijana Zrnic, Michael I. Jordan

Our method follows the general approach of split conformal prediction; we use holdout data to calibrate the size of the prediction sets but preserve privacy by using a privatized quantile subroutine.

Conformal Prediction, Decision Making, +1 more

Distribution-Free, Risk-Controlling Prediction Sets

3 code implementations · 7 Jan 2021 · Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, Michael I. Jordan

While improving prediction accuracy has been the focus of machine learning in recent years, this alone does not suffice for reliable decision-making.

BIG-bench Machine Learning, Classification, +9 more

Uncertainty Sets for Image Classifiers using Conformal Prediction

5 code implementations · ICLR 2021 · Anastasios Angelopoulos, Stephen Bates, Jitendra Malik, Michael I. Jordan

Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings.

Conformal Prediction, Uncertainty Quantification

Achieving Equalized Odds by Resampling Sensitive Attributes

1 code implementation · NeurIPS 2020 · Yaniv Romano, Stephen Bates, Emmanuel J. Candès

We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness.

Attribute, Fairness, +3 more
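
A small sketch of the equalized odds notion itself, measured as the largest across-group gap in true-positive and false-positive rates; this is only a diagnostic of the fairness criterion, not the paper's resampling-based training framework.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest across-group gap in TPR and FPR for a binary predictor.

    Equalized odds asks these rates to match across levels of the sensitive attribute.
    """
    gaps = []
    for label in (1, 0):                               # condition on Y = 1 (TPR), Y = 0 (FPR)
        rates = [y_pred[(group == g) & (y_true == label)].mean()
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)
```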

Metropolized Knockoff Sampling

1 code implementation · 1 Mar 2019 · Stephen Bates, Emmanuel Candès, Lucas Janson, Wenshuo Wang

Model-X knockoffs is a wrapper that transforms essentially any feature importance measure into a variable selection algorithm, which discovers true effects while rigorously controlling the expected fraction of false positives.

Methodology
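
For context, the standard knockoffs+ selection threshold that is applied after knockoffs have been sampled; the paper's contribution, a Metropolis-style sampler for the knockoff variables themselves, is not shown.

```python
import numpy as np

def knockoff_select(W, q=0.1):
    """Knockoffs+ filter: select features while controlling the FDR at level q.

    W: feature statistics, one per feature; large positive values indicate the original
    feature looks more important than its knockoff.
    """
    W = np.asarray(W, dtype=float)
    for t in np.sort(np.abs(W[W != 0])):               # candidate thresholds
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return np.flatnonzero(W >= t)               # indices of selected features
    return np.array([], dtype=int)
```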
