Search Results for author: Omer Reingold

Found 18 papers, 3 papers with code

Oracle Efficient Online Multicalibration and Omniprediction

no code implementations • 18 Jul 2023 • Sumegha Garg, Christopher Jung, Omer Reingold, Aaron Roth

We develop a new online multicalibration algorithm that is well defined for infinite benchmark classes $F$ and is oracle efficient (i.e., for any class $F$, the algorithm takes the form of an efficient reduction to a no-regret learning algorithm for $F$).

Fairness

Dissenting Explanations: Leveraging Disagreement to Reduce Model Overreliance

no code implementations • 14 Jul 2023 • Omer Reingold, Judy Hanwen Shen, Aditi Talati

While explainability is a desirable characteristic of increasingly complex black-box models, modern explanation methods have been shown to be inconsistent and contradictory.

Generative Models of Huge Objects

no code implementations • 24 Feb 2023 • Lunjia Hu, Inbal Livni-Navon, Omer Reingold

In this work, we extend the work of Goldreich, Goldwasser and Nussboim (SICOMP 2010), which focused on the implementation of huge objects that are indistinguishable from the uniform distribution while satisfying some global properties (which they coined truthfulness).

Fairness • Learning Theory • +1

Swap Agnostic Learning, or Characterizing Omniprediction via Multicalibration

no code implementations • NeurIPS 2023 • Parikshit Gopalan, Michael P. Kim, Omer Reingold

We establish an equivalence between swap variants of omniprediction and multicalibration and swap agnostic learning.

Fairness

Loss Minimization through the Lens of Outcome Indistinguishability

no code implementations • 16 Oct 2022 • Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, Udi Wieder

This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration.

Fairness

Omnipredictors for Constrained Optimization

no code implementations • 15 Sep 2022 • Lunjia Hu, Inbal Livni-Navon, Omer Reingold, Chutong Yang

In this paper, we introduce omnipredictors for constrained optimization and study their complexity and implications.

Fairness

Metric Entropy Duality and the Sample Complexity of Outcome Indistinguishability

no code implementations • 9 Mar 2022 • Lunjia Hu, Charlotte Peale, Omer Reingold

In this setting, we show that the sample complexity of outcome indistinguishability is characterized by the fat-shattering dimension of $D$.

PAC learning

KL Divergence Estimation with Multi-group Attribution

1 code implementation • 28 Feb 2022 • Parikshit Gopalan, Nina Narodytska, Omer Reingold, Vatsal Sharan, Udi Wieder

Estimating the Kullback-Leibler (KL) divergence between two distributions given samples from them is well-studied in machine learning and information theory.

Fairness
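The simplest baseline for the task this paper studies is a plug-in estimate of KL(P || Q) built from empirical frequencies. This is a minimal illustrative sketch (not the paper's multi-group method), assuming discrete samples over a known finite support; add-one smoothing is used so empirical zero counts in Q do not make the estimate infinite.

```python
from collections import Counter
import math

def plugin_kl(samples_p, samples_q, support):
    """Plug-in estimate of KL(P || Q) from samples, with add-one smoothing
    so that elements unseen in samples_q do not blow up the estimate."""
    counts_p = Counter(samples_p)
    counts_q = Counter(samples_q)
    n_p = len(samples_p) + len(support)  # smoothed totals
    n_q = len(samples_q) + len(support)
    kl = 0.0
    for x in support:
        p = (counts_p[x] + 1) / n_p
        q = (counts_q[x] + 1) / n_q
        kl += p * math.log(p / q)
    return kl
```

By Gibbs' inequality the smoothed estimate is always non-negative, and it is zero exactly when the smoothed empirical distributions coincide.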

Omnipredictors

no code implementations • 11 Sep 2021 • Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, Udi Wieder

We suggest a rigorous new paradigm for loss minimization in machine learning where the loss function can be ignored at the time of learning and only be taken into account when deciding an action.

Fairness

Multicalibrated Partitions for Importance Weights

no code implementations • 10 Mar 2021 • Parikshit Gopalan, Omer Reingold, Vatsal Sharan, Udi Wieder

We significantly strengthen previous work that uses the MaxEntropy approach, which defines the importance weights based on a distribution $Q$ closest to $P$ that looks the same as $R$ on every set $C \in \mathcal{C}$, where $\mathcal{C}$ may be a huge collection of sets.

Anomaly Detection • Domain Adaptation
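The idea of defining importance weights through a partition can be illustrated very simply: within each cell of a partition, the weight is the ratio of the two empirical cell masses. This is only a toy sketch of the piecewise-constant construction (the `cell` function and the equal-weight-within-cell assumption are illustrative, not the paper's multicalibrated partition):

```python
from collections import Counter

def partition_importance_weights(samples_p, samples_r, cell):
    """Piecewise-constant importance weights from a partition: each cell
    gets weight (empirical mass under P) / (empirical mass under R)."""
    mass_p = Counter(cell(x) for x in samples_p)
    mass_r = Counter(cell(x) for x in samples_r)
    n_p, n_r = len(samples_p), len(samples_r)
    return {c: (mass_p[c] / n_p) / (mass_r[c] / n_r) for c in mass_r}
```

Reweighting R-samples by these weights makes the reweighted cell masses match P exactly on the partition, which is the basic guarantee a finer, multicalibrated partition strengthens.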

Outcome Indistinguishability

no code implementations • 26 Nov 2020 • Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona

Prediction algorithms assign numbers to individuals that are popularly understood as individual "probabilities" -- what is the probability of 5-year survival after cancer diagnosis?

Robust Mean Estimation on Highly Incomplete Data with Arbitrary Outliers

no code implementations • 18 Aug 2020 • Lunjia Hu, Omer Reingold

We study the problem of robustly estimating the mean of a $d$-dimensional distribution given $N$ examples, where most coordinates of every example may be missing and $\varepsilon N$ examples may be arbitrarily corrupted.
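A standard first attempt in this setting handles both difficulties coordinate by coordinate: ignore the missing entries and take the median of what remains, since the median tolerates a minority of arbitrarily corrupted examples. This is a hedged baseline sketch (missingness encoded as NaN), not the estimator analyzed in the paper:

```python
import math
import statistics

def coordinatewise_median(rows):
    """Estimate the mean of each coordinate by the median of its observed
    (non-NaN) entries; the median is unmoved by a minority of outliers."""
    d = len(rows[0])
    estimate = []
    for j in range(d):
        observed = [row[j] for row in rows if not math.isnan(row[j])]
        estimate.append(statistics.median(observed))
    return estimate
```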

Tracking and Improving Information in the Service of Fairness

no code implementations • 22 Apr 2019 • Sumegha Garg, Michael P. Kim, Omer Reingold

As algorithmic prediction systems have become widespread, fears that these systems may inadvertently discriminate against members of underrepresented populations have grown.

Decision Making • Fairness • +1

Multicalibration: Calibration for the (Computationally-Identifiable) Masses

no code implementations • ICML 2018 • Ursula Hebert-Johnson, Michael Kim, Omer Reingold, Guy Rothblum

We develop and study multicalibration as a new measure of fairness in machine learning that aims to mitigate inadvertent or malicious discrimination that is introduced at training time (even from ground truth data).

Fairness
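Multicalibration requires a predictor to be calibrated not only on average but simultaneously within every identifiable subgroup. A minimal empirical check of this condition can be sketched as follows (groups given explicitly as index sets and predictions bucketed into uniform bins; both choices are illustrative simplifications of the paper's computationally-identifiable classes):

```python
def multicalibration_violation(preds, labels, groups, bins=10):
    """Largest gap, over (group, prediction-bin) pairs, between the mean
    prediction and the mean outcome; multicalibration asks this to be
    small for every subgroup, not only for the population as a whole."""
    worst = 0.0
    for group in groups:  # each group is a list of example indices
        for b in range(bins):
            lo, hi = b / bins, (b + 1) / bins
            idx = [i for i in group if lo <= preds[i] < hi]
            if not idx:
                continue
            mean_pred = sum(preds[i] for i in idx) / len(idx)
            mean_label = sum(labels[i] for i in idx) / len(idx)
            worst = max(worst, abs(mean_pred - mean_label))
    return worst
```

A predictor can make this quantity zero for the single group of all examples while badly miscalibrating a subgroup, which is exactly the failure mode multicalibration rules out.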

Fairness Through Computationally-Bounded Awareness

no code implementations • NeurIPS 2018 • Michael P. Kim, Omer Reingold, Guy N. Rothblum

We study the problem of fair classification within the versatile framework of Dwork et al. [ITCS '12], which assumes the existence of a metric that measures similarity between pairs of individuals.

Fairness

Calibration for the (Computationally-Identifiable) Masses

1 code implementation • 22 Nov 2017 • Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, Guy N. Rothblum

We develop and study multicalibration -- a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data.

Fairness

Preserving Statistical Validity in Adaptive Data Analysis

no code implementations • 10 Nov 2014 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth

We show that, surprisingly, there is a way to estimate an exponential in $n$ number of expectations accurately even if the functions are chosen adaptively.

Two-sample testing
