1 code implementation • 20 Feb 2025 • Evan Frick, Connor Chen, Joseph Tennyson, Tianle Li, Wei-Lin Chiang, Anastasios N. Angelopoulos, Ion Stoica
Aggregate LLM leaderboards average performance across prompts, obscuring prompt-specific differences between models; to address this, we propose Prompt-to-Leaderboard (P2L), a method that produces leaderboards specific to a prompt.
1 code implementation • 14 Jan 2025 • Anastasios N. Angelopoulos, Michael I. Jordan, Ryan J. Tibshirani
We present a new perspective on online learning that we refer to as gradient equilibrium: a sequence of iterates achieves gradient equilibrium if the average of gradients of losses along the sequence converges to zero.
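For intuition, here is a small numerical sketch (my own toy example, not code from the paper): online gradient descent on squared losses against a drifting data stream, where gradient equilibrium amounts to the running average of the gradients, and hence the average prediction bias, shrinking to zero.

```python
import numpy as np

# Toy illustration (not from the paper): online gradient descent on squared
# losses l_t(theta) = 0.5 * (theta - y_t)^2 with a drifting target stream.
# Gradient equilibrium asks that the running average of gradients tends to zero.
rng = np.random.default_rng(0)
T, eta, theta = 10_000, 0.1, 0.0
grad_sum = 0.0
for t in range(1, T + 1):
    y_t = np.sin(t / 500.0) + rng.normal()  # nonstationary observations
    g_t = theta - y_t                        # gradient of the squared loss
    theta -= eta * g_t                       # online gradient descent step
    grad_sum += g_t
    if t % 2000 == 0:
        print(f"t={t:6d}  average gradient = {grad_sum / t:+.4f}")
# The average gradient shrinks toward zero even though the individual losses
# never converge; for this loss, gradient equilibrium means the predictions
# theta are unbiased for y_t on average.
```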
no code implementations • 18 Nov 2024 • Anastasios N. Angelopoulos, Rina Foygel Barber, Stephen Bates
This book is about conformal prediction and related inferential techniques that build on permutation tests and exchangeability.
1 code implementation • 18 Oct 2024 • Evan Frick, Tianle Li, Connor Chen, Wei-Lin Chiang, Anastasios N. Angelopoulos, Jiantao Jiao, Banghua Zhu, Joseph E. Gonzalez, Ion Stoica
The gold-standard way to evaluate a reward model is to run a full RLHF training pipeline and probe downstream LLM performance, which is prohibitively expensive; to address this, we build a predictive model of downstream LLM performance by evaluating the reward model on proxy tasks.
1 code implementation • 28 Mar 2024 • Drew T. Nguyen, Reese Pathak, Anastasios N. Angelopoulos, Stephen Bates, Michael I. Jordan
Decision-making pipelines are generally characterized by tradeoffs among various risk functions.
1 code implementation • 9 Mar 2024 • Pierre Boyeau, Anastasios N. Angelopoulos, Nir Yosef, Jitendra Malik, Michael I. Jordan
The evaluation of machine learning models using human-labeled validation data can be expensive and time-consuming.
no code implementations • 12 Feb 2024 • Amit Kohli, Anastasios N. Angelopoulos, Laura Waller
The performance of an imaging system is limited by optical aberrations, which cause blurriness in the resulting image.
1 code implementation • 2 Feb 2024 • Anastasios N. Angelopoulos, Rina Foygel Barber, Stephen Bates
We introduce a method for online conformal prediction with decaying step sizes.
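A minimal sketch of the general idea, with an assumed step-size schedule and a stand-in forecaster (both illustrative, not the paper's exact choices): track the score threshold online with pinball-loss gradient steps whose size decays over time.

```python
import numpy as np

# Sketch of online conformal prediction via quantile tracking with a decaying
# step size. At each time t we form the interval [f_t - q, f_t + q], observe
# the residual score, and nudge q up after a miss and down after a hit.
rng = np.random.default_rng(1)
alpha, T = 0.1, 5_000
q, miss = 1.0, 0
for t in range(1, T + 1):
    eta_t = 0.5 / t ** 0.6                  # decaying step size (assumed form)
    y_t = np.sin(t / 200.0) + rng.normal(scale=0.5)
    f_t = np.sin(t / 200.0)                 # stand-in point forecast
    s_t = abs(y_t - f_t)                    # conformal score (absolute residual)
    covered = s_t <= q
    miss += not covered
    # pinball-loss gradient step: raise q after a miss, lower it after a hit
    q += eta_t * ((not covered) - alpha)
print(f"long-run coverage = {1 - miss / T:.3f} (target {1 - alpha})")
```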
1 code implementation • 2 Nov 2023 • Anastasios N. Angelopoulos, John C. Duchi, Tijana Zrnic
We present PPI++: a computationally lightweight methodology for estimation and inference based on a small labeled dataset and a typically much larger dataset of machine-learning predictions.
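A hedged sketch of the flavor of estimator involved, specialized to a mean; the plug-in power-tuning weight below is one reasonable choice and is not claimed to be the paper's exact recipe.

```python
import numpy as np

# Sketch of a prediction-powered mean estimate with a power-tuning weight lam.
# Y are gold labels on a small set; Yhat_labeled / Yhat_unlabeled are model
# predictions on the labeled set and the much larger unlabeled set.
# lam = 1 recovers the basic prediction-powered estimate; lam = 0 ignores the
# predictions entirely.
def ppi_mean(Y, Yhat_labeled, Yhat_unlabeled, lam=None):
    n, N = len(Y), len(Yhat_unlabeled)
    if lam is None:
        # illustrative plug-in weight balancing the two variance terms
        cov = np.cov(Y, Yhat_labeled)[0, 1]
        lam = cov / (np.var(Yhat_labeled) * (1 + n / N))
    point = lam * Yhat_unlabeled.mean() + (Y - lam * Yhat_labeled).mean()
    # normal-approximation standard error of the two independent averages
    se = np.sqrt(lam**2 * Yhat_unlabeled.var() / N
                 + (Y - lam * Yhat_labeled).var() / n)
    return point, se

# usage: point, se = ppi_mean(Y, Yhat_labeled, Yhat_unlabeled)
#        ci = (point - 1.96 * se, point + 1.96 * se)
```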
no code implementations • 9 Oct 2023 • Jordan Lekeufack, Anastasios N. Angelopoulos, Andrea Bajcsy, Michael I. Jordan, Jitendra Malik
We introduce Conformal Decision Theory, a framework for producing safe autonomous decisions despite imperfect machine learning predictions.
1 code implementation • 31 Jul 2023 • Anastasios N. Angelopoulos, Emmanuel J. Candes, Ryan J. Tibshirani
We study the problem of uncertainty quantification for time series prediction, with the goal of providing easy-to-use algorithms with formal guarantees.
1 code implementation • NeurIPS 2023 • Tiffany Ding, Anastasios N. Angelopoulos, Stephen Bates, Michael I. Jordan, Ryan J. Tibshirani
Standard conformal prediction methods provide a marginal coverage guarantee, which means that for a random test point, the conformal prediction set contains the true label with a user-specified probability.
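In symbols, marginal coverage and the stronger label-conditional coverage that motivates going beyond it can be written as follows (standard definitions, not specific to this paper's method):

```latex
% Marginal coverage: averaged over the random test point (X, Y)
\mathbb{P}\big(Y \in \mathcal{C}(X)\big) \;\ge\; 1 - \alpha
% Class-conditional coverage: must hold separately for every label y
\mathbb{P}\big(Y \in \mathcal{C}(X) \,\big|\, Y = y\big) \;\ge\; 1 - \alpha
\quad \text{for all } y
```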
2 code implementations • 23 Jan 2023 • Anastasios N. Angelopoulos, Stephen Bates, Clara Fannjiang, Michael I. Jordan, Tijana Zrnic
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
no code implementations • 28 Sep 2022 • Bat-Sheva Einbinder, Shai Feldman, Stephen Bates, Anastasios N. Angelopoulos, Asaf Gendler, Yaniv Romano
We study the robustness of conformal prediction, a powerful tool for uncertainty quantification, to label noise.
1 code implementation • 4 Aug 2022 • Anastasios N. Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, Tal Schuster
We extend conformal prediction to control the expected value of any monotone loss function.
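A minimal sketch of what such a calibration step can look like on a grid of thresholds, assuming losses bounded by B and non-increasing in the threshold; the grid-based selection rule below is an illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

# Sketch of conformal-risk-control-style calibration over a threshold grid.
def risk_controlling_lambda(loss_matrix, lambdas, alpha, B=1.0):
    """loss_matrix[i, j] = loss of calibration point i at lambdas[j];
    lambdas increasing, losses non-increasing in lambda and bounded by B."""
    n = loss_matrix.shape[0]
    avg_loss = loss_matrix.mean(axis=0)
    # inflated empirical risk: the smallest lambda meeting this bound keeps
    # the expected loss on a fresh exchangeable point at or below alpha
    ok = (n / (n + 1)) * avg_loss + B / (n + 1) <= alpha
    if not ok.any():
        return lambdas[-1]          # fall back to the most conservative threshold
    return lambdas[ok.argmax()]     # smallest qualifying lambda on the grid
```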
1 code implementation • 20 Jul 2022 • Swami Sankaranarayanan, Anastasios N. Angelopoulos, Stephen Bates, Yaniv Romano, Phillip Isola
Meaningful uncertainty quantification in computer vision requires reasoning about semantic information -- say, the hair color of the person in a photo or the location of a car on the street.
1 code implementation • 5 Jul 2022 • Charles Lu, Anastasios N. Angelopoulos, Stuart Pomerantz
Our work applies recently developed distribution-free uncertainty quantification methods -- specifically conformal prediction -- to a deep-learning model for grading the severity of spinal stenosis in lumbar spine MRI.
no code implementations • 4 Jul 2022 • Anastasios N. Angelopoulos, Karl Krauth, Stephen Bates, Yixin Wang, Michael I. Jordan
Building from a pre-trained ranking model, we show how to return a set of items that is rigorously guaranteed to contain mostly good items.
no code implementations • 17 Jun 2022 • Amit Kohli, Anastasios N. Angelopoulos, David McAllister, Esther Whang, Sixian You, Kyrollos Yanny, Federico M. Gasparoli, Bo-Jui Chang, Reto Fiolka, Laura Waller
The most ubiquitous form of computational aberration correction for microscopy is deconvolution.
1 code implementation • 8 Feb 2022 • Clara Fannjiang, Stephen Bates, Anastasios N. Angelopoulos, Jennifer Listgarten, Michael I. Jordan
Quantifying predictive uncertainty on designed inputs is challenging because of a characteristic type of distribution shift between the training and test data in the design setting -- one in which the training and test data are statistically dependent, since the latter is chosen based on the former.
1 code implementation • 3 Oct 2021 • Anastasios N. Angelopoulos, Stephen Bates, Emmanuel J. Candès, Michael I. Jordan, Lihua Lei
We introduce a framework for calibrating machine learning models so that their predictions satisfy explicit, finite-sample statistical guarantees.
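One concrete way to instantiate this kind of calibration (a sketch under assumptions: losses in [0, 1], a prespecified scan order, Hoeffding p-values) is to test each candidate threshold and keep only those whose risk is certified below the target level.

```python
import numpy as np

# Sketch of a Learn-then-Test style calibration: for each candidate threshold
# lam, test the null "risk(lam) > alpha" with a Hoeffding p-value on held-out
# losses, scanning lambdas in a fixed order and stopping at the first failure
# (fixed-sequence testing), so the overall error level is controlled at delta.
def calibrate_thresholds(loss_matrix, lambdas, alpha, delta):
    """loss_matrix[i, j] = loss of calibration point i at lambdas[j];
    lambdas ordered so that risk is expected to decrease along the scan."""
    n = loss_matrix.shape[0]
    valid = []
    for j, lam in enumerate(lambdas):
        risk_hat = loss_matrix[:, j].mean()
        # Hoeffding bound on the p-value for the null risk(lam) > alpha
        p_value = np.exp(-2 * n * max(alpha - risk_hat, 0.0) ** 2)
        if p_value > delta:
            break                     # first non-rejection ends the scan
        valid.append(lam)
    return valid  # each returned lam has risk <= alpha w.p. >= 1 - delta
```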
4 code implementations • 15 Jul 2021 • Anastasios N. Angelopoulos, Stephen Bates
Conformal prediction is a user-friendly paradigm for creating statistically rigorous uncertainty sets/intervals for the predictions of black-box models.
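As a hedged illustration of the basic recipe, here is split conformal prediction for regression with absolute-residual scores; `model` is assumed to expose a scikit-learn-style `predict`.

```python
import numpy as np

# Sketch of split conformal prediction for regression.
def split_conformal_interval(model, X_cal, y_cal, x_test, alpha=0.1):
    scores = np.abs(y_cal - model.predict(X_cal))          # calibration scores
    n = len(scores)
    # finite-sample-corrected quantile level
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    pred = model.predict(x_test)
    # interval covers the true label w.p. >= 1 - alpha under exchangeability
    return pred - qhat, pred + qhat
```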
1 code implementation • 11 Feb 2021 • Anastasios N. Angelopoulos, Stephen Bates, Tijana Zrnic, Michael I. Jordan
Our method follows the general approach of split conformal prediction; we use holdout data to calibrate the size of the prediction sets but preserve privacy by using a privatized quantile subroutine.
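A rough sketch of one way a privatized quantile can be computed, using the exponential mechanism over a public grid of candidate thresholds; the exact subroutine and coverage correction in the paper may differ.

```python
import numpy as np

# Illustrative differentially private quantile via the exponential mechanism:
# score each public grid point by how close its empirical rank is to the
# target conformal rank (sensitivity 1 when one calibration score changes),
# then sample a grid point with probability proportional to exp(eps * u / 2).
def dp_conformal_quantile(scores, alpha, epsilon, grid):
    n = len(scores)
    target_rank = np.ceil((n + 1) * (1 - alpha))
    ranks = np.array([(scores <= g).sum() for g in grid])
    utility = -np.abs(ranks - target_rank)
    weights = np.exp(epsilon * utility / 2.0)
    return np.random.choice(grid, p=weights / weights.sum())
```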
1 code implementation • 7 Apr 2020 • Anastasios N. Angelopoulos, Julien N. P. Martel, Amit P. S. Kohli, Jorg Conradt, Gordon Wetzstein
The cameras in modern gaze-tracking systems suffer from fundamental bandwidth and power limitations, which realistically constrain data acquisition speed to 300 Hz.