
no code implementations • 4 Sep 2023 • Nivasini Ananthakrishnan, Stephen Bates, Michael I. Jordan, Nika Haghtalab

To address the lack of a priori knowledge regarding the optimal performance, we give a convex program that can adaptively and efficiently compute the optimal contract.

no code implementations • 7 Jul 2023 • Stephen Bates, Michael I. Jordan, Michael Sklar, Jake A. Soloff

We show how the principal can conduct statistical inference that leverages the information that is revealed by an agent's strategic behavior -- their choice to run a trial or not.

1 code implementation • 15 Jun 2023 • Tiffany Ding, Anastasios N. Angelopoulos, Stephen Bates, Michael I. Jordan, Ryan J. Tibshirani

Standard conformal prediction methods provide a marginal coverage guarantee, which means that for a random test point, the conformal prediction set contains the true label with a user-chosen probability.
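A minimal sketch of what such a marginal coverage guarantee looks like in split conformal form, for regression with an absolute-residual score (the function name and score choice are illustrative, not taken from the paper):

```python
import numpy as np

def split_conformal_interval(resid_cal, y_pred_test, alpha=0.1):
    """Split conformal prediction interval for regression.

    resid_cal: absolute residuals |y - yhat| on a held-out calibration set.
    y_pred_test: point prediction(s) for new test inputs.
    Returns (lo, hi) arrays that contain the true label with probability
    at least 1 - alpha, marginally over calibration and test draws.
    """
    n = len(resid_cal)
    # Finite-sample-corrected quantile level of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(resid_cal, min(q_level, 1.0), method="higher")
    y_pred_test = np.asarray(y_pred_test, dtype=float)
    return y_pred_test - qhat, y_pred_test + qhat
```

The guarantee is marginal: averaged over random test points, not conditional on any particular one — which is exactly the limitation this line of work examines.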

no code implementations • 24 May 2023 • Serena Wang, Stephen Bates, P. M. Aronow, Michael I. Jordan

From the social sciences to machine learning, it has been well documented that metrics to be optimized are not always aligned with social welfare.

1 code implementation • 23 Jan 2023 • Anastasios N. Angelopoulos, Stephen Bates, Clara Fannjiang, Michael I. Jordan, Tijana Zrnic

We introduce prediction-powered inference -- a framework for performing valid statistical inference when an experimental data set is supplemented with predictions from a machine-learning system.
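For the simple case of estimating a mean, the prediction-powered point estimate combines model predictions on a large unlabeled set with a bias correction ("rectifier") computed on a small labeled set. A hedged sketch (the function name is illustrative; the paper's framework also provides confidence intervals, not just point estimates):

```python
import numpy as np

def ppi_mean_estimate(y_labeled, yhat_labeled, yhat_unlabeled):
    """Prediction-powered point estimate of E[Y].

    The rectifier corrects the model's average bias, so the estimate
    remains unbiased even when the predictions themselves are biased.
    """
    rectifier = np.mean(np.asarray(y_labeled) - np.asarray(yhat_labeled))
    return np.mean(yhat_unlabeled) + rectifier
```

If the model is accurate, the unlabeled predictions do most of the work and the resulting intervals are much tighter than using the labeled data alone.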

no code implementations • 10 Nov 2022 • Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, Michael I. Jordan

This result shows that exponential-in-$m$ samples are sufficient and necessary to learn a near-optimal contract, resolving an open problem on the hardness of online contract design.

no code implementations • 28 Sep 2022 • Shai Feldman, Bat-Sheva Einbinder, Stephen Bates, Anastasios N. Angelopoulos, Asaf Gendler, Yaniv Romano

In such cases, we can also correct for noise of bounded size in the conformal prediction algorithm, ensuring that the target risk with respect to the ground-truth labels is achieved without score or data regularity assumptions.

1 code implementation • 4 Aug 2022 • Anastasios N. Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, Tal Schuster

We extend conformal prediction to control the expected value of any monotone loss function.
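The core calibration step for such risk control can be sketched as a search for the smallest threshold whose corrected empirical risk stays below the target level. This is a minimal illustration under stated assumptions (a monotone, bounded loss), with hypothetical names; it is not the paper's full algorithm:

```python
import numpy as np

def conformal_risk_lambda(loss_fn, lambdas, alpha, n, B=1.0):
    """Smallest lambda whose corrected empirical risk is <= alpha.

    loss_fn(lam) -> mean loss over n calibration points; assumed
    nonincreasing in lam and bounded above by B.
    """
    for lam in sorted(lambdas):
        # The (n * R + B) / (n + 1) correction accounts for the unseen
        # test point, giving a guarantee on the expected loss.
        if (n * loss_fn(lam) + B) / (n + 1) <= alpha:
            return lam
    return max(lambdas)  # fall back to the most conservative setting
```

With the miscoverage indicator as the loss, this recovers ordinary conformal prediction; other monotone losses (e.g., false negative rate) slot in the same way.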

1 code implementation • 20 Jul 2022 • Swami Sankaranarayanan, Anastasios N. Angelopoulos, Stephen Bates, Yaniv Romano, Phillip Isola

Meaningful uncertainty quantification in computer vision requires reasoning about semantic information -- say, the hair color of the person in a photo or the location of a car on the street.

no code implementations • 4 Jul 2022 • Anastasios N. Angelopoulos, Karl Krauth, Stephen Bates, Yixin Wang, Michael I. Jordan

Building from a pre-trained ranking model, we show how to return a set of items that is rigorously guaranteed to contain mostly good items.

no code implementations • 6 Jun 2022 • Yaodong Yu, Stephen Bates, Yi Ma, Michael I. Jordan

Uncertainty quantification is essential for the reliable deployment of machine learning models to high-stakes application domains.

1 code implementation • 18 May 2022 • Shai Feldman, Liran Ringel, Stephen Bates, Yaniv Romano

To provide rigorous uncertainty quantification for online learning models, we develop a framework for constructing uncertainty sets that provably control risk -- such as coverage of confidence intervals, false negative rate, or F1 score -- in the online setting.

no code implementations • 13 May 2022 • Stephen Bates, Michael I. Jordan, Michael Sklar, Jake A. Soloff

The pharmaceutical company wishes to sell a product to make a profit, and the FDA wishes to ensure that only efficacious drugs are released to the public.

1 code implementation • 10 Feb 2022 • Anastasios N. Angelopoulos, Amit P. Kohli, Stephen Bates, Michael I. Jordan, Jitendra Malik, Thayer Alshaabi, Srigokul Upadhyayula, Yaniv Romano

Image-to-image regression is an important learning task, used frequently in biological imaging.

1 code implementation • 8 Feb 2022 • Clara Fannjiang, Stephen Bates, Anastasios N. Angelopoulos, Jennifer Listgarten, Michael I. Jordan

This is challenging because of a characteristic type of distribution shift between the training and test data in the design setting -- one in which the training and test data are statistically dependent, as the latter is chosen based on the former.

1 code implementation • 25 Jan 2022 • Mariel A. Werner, Anastasios Angelopoulos, Stephen Bates, Michael I. Jordan

The blessing of ubiquitous data also comes with a curse: the communication, storage, and labeling of massive, mostly redundant datasets.

1 code implementation • 3 Oct 2021 • Anastasios N. Angelopoulos, Stephen Bates, Emmanuel J. Candès, Michael I. Jordan, Lihua Lei

We introduce a framework for calibrating machine learning models so that their predictions satisfy explicit, finite-sample statistical guarantees.

1 code implementation • 2 Oct 2021 • Shai Feldman, Stephen Bates, Yaniv Romano

We develop a method to generate predictive regions that cover a multivariate response variable with a user-specified probability.

2 code implementations • 15 Jul 2021 • Anastasios N. Angelopoulos, Stephen Bates

Conformal prediction is a user-friendly paradigm for creating statistically rigorous uncertainty sets/intervals for the predictions of such models.

no code implementations • NeurIPS 2021 • Celestine Mendler-Dünner, Wenshuo Guo, Stephen Bates, Michael I. Jordan

An increasingly common setting in machine learning involves multiple parties, each with their own data, who want to jointly make predictions on future test points.

1 code implementation • NeurIPS 2021 • Shai Feldman, Stephen Bates, Yaniv Romano

To remedy this, we modify the loss function to promote independence between the size of the intervals and the indicator of a miscoverage event.

1 code implementation • 16 Apr 2021 • Stephen Bates, Emmanuel Candès, Lihua Lei, Yaniv Romano, Matteo Sesia

We then introduce a new method to compute p-values that are both valid conditionally on the training data and independent of each other for different test points; this paves the way to stronger type-I error guarantees.

2 code implementations • 1 Apr 2021 • Stephen Bates, Trevor Hastie, Robert Tibshirani

Cross-validation is a widely-used technique to estimate prediction error, but its behavior is complex and not fully understood.
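For reference, the standard K-fold cross-validation estimate whose behavior the paper analyzes looks like this (a generic sketch with hypothetical `fit`/`predict` callables, using squared error):

```python
import numpy as np

def kfold_cv_error(x, y, fit, predict, k=5, seed=0):
    """K-fold cross-validation estimate of mean squared prediction error."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for held_out in folds:
        # Fit on everything outside the held-out fold, score on the fold.
        train = np.setdiff1d(idx, held_out)
        model = fit(x[train], y[train])
        errs.append(np.mean((y[held_out] - predict(model, x[held_out])) ** 2))
    return float(np.mean(errs))
```

The subtlety the paper addresses is what this number actually estimates: the error of the model fit on the full data, or the average error of models fit on datasets like it.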

1 code implementation • 11 Feb 2021 • Anastasios N. Angelopoulos, Stephen Bates, Tijana Zrnic, Michael I. Jordan

Our method follows the general approach of split conformal prediction; we use holdout data to calibrate the size of the prediction sets but preserve privacy by using a privatized quantile subroutine.

3 code implementations • 7 Jan 2021 • Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, Michael I. Jordan

While improving prediction accuracy has been the focus of machine learning in recent years, this alone does not suffice for reliable decision-making.

2 code implementations • ICLR 2021 • Anastasios Angelopoulos, Stephen Bates, Jitendra Malik, Michael I. Jordan

Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings.

1 code implementation • NeurIPS 2020 • Yaniv Romano, Stephen Bates, Emmanuel J. Candès

We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness.

1 code implementation • 1 Mar 2019 • Stephen Bates, Emmanuel Candès, Lucas Janson, Wenshuo Wang

Model-X knockoffs is a wrapper that transforms essentially any feature importance measure into a variable selection algorithm, which discovers true effects while rigorously controlling the expected fraction of false positives.

