no code implementations • 18 Jul 2023 • Sumegha Garg, Christopher Jung, Omer Reingold, Aaron Roth
We develop a new online multicalibration algorithm that is well defined for infinite benchmark classes $F$, and is oracle efficient (i.e., for any class $F$, the algorithm has the form of an efficient reduction to a no-regret learning algorithm for $F$).
no code implementations • 14 Jul 2023 • Omer Reingold, Judy Hanwen Shen, Aditi Talati
While explainability is a desirable characteristic of increasingly complex black-box models, modern explanation methods have been shown to be inconsistent and contradictory.
no code implementations • 24 Feb 2023 • Lunjia Hu, Inbal Livni-Navon, Omer Reingold
In this work, we extend the work of Goldreich, Goldwasser and Nussboim (SICOMP 2010), which focused on the implementation of huge objects that are indistinguishable from the uniform distribution while satisfying some global properties (which they coined truthfulness).
no code implementations • NeurIPS 2023 • Parikshit Gopalan, Michael P. Kim, Omer Reingold
We establish an equivalence between swap variants of omniprediction and multicalibration and swap agnostic learning.
no code implementations • 16 Oct 2022 • Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, Udi Wieder
This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration.
no code implementations • 15 Sep 2022 • Lunjia Hu, Inbal Livni-Navon, Omer Reingold, Chutong Yang
In this paper, we introduce omnipredictors for constrained optimization and study their complexity and implications.
no code implementations • 9 Mar 2022 • Lunjia Hu, Charlotte Peale, Omer Reingold
In this setting, we show that the sample complexity of outcome indistinguishability is characterized by the fat-shattering dimension of $D$.
1 code implementation • 28 Feb 2022 • Parikshit Gopalan, Nina Narodytska, Omer Reingold, Vatsal Sharan, Udi Wieder
Estimating the Kullback-Leibler (KL) divergence between two distributions given samples from them is well-studied in machine learning and information theory.
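The entry above concerns estimating KL divergence from samples. As a point of reference only (this is a naive plug-in baseline over a discrete domain, not the paper's estimator), the estimate can be sketched as:

```python
from collections import Counter
import math

def plugin_kl(xs, ys, alpha=1.0):
    """Naive plug-in estimate of KL(P || Q) from samples xs ~ P, ys ~ Q.

    Add-alpha smoothing over the joint support keeps the estimate finite
    when a symbol appears in xs but not in ys.
    """
    support = set(xs) | set(ys)
    cx, cy = Counter(xs), Counter(ys)
    nx = len(xs) + alpha * len(support)
    ny = len(ys) + alpha * len(support)
    kl = 0.0
    for s in support:
        p = (cx[s] + alpha) / nx  # smoothed empirical P
        q = (cy[s] + alpha) / ny  # smoothed empirical Q
        kl += p * math.log(p / q)
    return kl
```

Identical sample sets yield an estimate of zero, and distinct empirical distributions give a positive value by Gibbs' inequality; the real difficulty the paper addresses lies in continuous or large domains, where plug-in estimates like this break down.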
no code implementations • 11 Sep 2021 • Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, Udi Wieder
We suggest a rigorous new paradigm for loss minimization in machine learning where the loss function can be ignored at the time of learning and only be taken into account when deciding an action.
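The paradigm above defers the loss to decision time: a predictor is trained once, and for any loss encountered later one picks the action minimizing expected loss under the prediction. A minimal sketch of that decision step for a binary outcome (the function name and signature are illustrative, not the paper's API):

```python
def best_action(p, actions, loss):
    """Given a predicted probability p of outcome 1, candidate actions,
    and a loss function loss(action, outcome), return the action that
    minimizes expected loss under the prediction. The predictor is
    fixed; only this step depends on the loss."""
    return min(actions, key=lambda a: p * loss(a, 1) + (1 - p) * loss(a, 0))
```

The same predictor then serves squared loss, absolute loss, or any other loss supplied at decision time, with no retraining.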
no code implementations • 10 Mar 2021 • Parikshit Gopalan, Omer Reingold, Vatsal Sharan, Udi Wieder
We significantly strengthen previous work that uses the MaxEntropy approach, which defines the importance weights based on the distribution $Q$ closest to $P$ that looks the same as $R$ on every set $C \in \mathcal{C}$, where $\mathcal{C}$ may be a huge collection of sets.
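The requirement that $Q$ look the same as $R$ on every set in a collection can be illustrated with classical iterative proportional fitting over a small finite domain (a sketch of the set-matching constraint only, not the paper's MaxEntropy algorithm; it assumes all set masses stay strictly between 0 and 1):

```python
def fit_to_sets(weights, sets, targets, iters=100):
    """Multiplicatively rescale a probability vector so that its mass on
    each given index set matches the corresponding target mass
    (iterative proportional fitting). sets is a list of index sets;
    targets gives the desired mass on each."""
    total = sum(weights)
    w = [x / total for x in weights]
    for _ in range(iters):
        for S, t in zip(sets, targets):
            inside = sum(w[i] for i in S)
            for i in range(len(w)):
                # scale mass inside S up to t, outside S down to 1 - t
                w[i] *= t / inside if i in S else (1 - t) / (1 - inside)
    return w
```

With a single constraint set, one pass already matches the target exactly; with overlapping sets, the cyclic updates converge toward a distribution satisfying all constraints.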
no code implementations • 26 Nov 2020 • Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona
Prediction algorithms assign numbers to individuals that are popularly understood as individual "probabilities" -- what is the probability of 5-year survival after cancer diagnosis?
no code implementations • 18 Aug 2020 • Lunjia Hu, Omer Reingold
We study the problem of robustly estimating the mean of a $d$-dimensional distribution given $N$ examples, where most coordinates of every example may be missing and $\varepsilon N$ examples may be arbitrarily corrupted.
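For intuition about this setting (a simple baseline, not the paper's estimator), a coordinate-wise median ignores missing entries and already tolerates a minority of arbitrarily corrupted values in each coordinate:

```python
def coordinate_median(examples):
    """Coordinate-wise median of d-dimensional examples, where a missing
    coordinate is represented as None. Each coordinate's median is taken
    over the observed entries only, so it is unaffected by missingness
    and robust to a minority of corrupted values in that coordinate."""
    d = len(examples[0])
    est = []
    for j in range(d):
        vals = sorted(x[j] for x in examples if x[j] is not None)
        if not vals:
            raise ValueError(f"coordinate {j} is never observed")
        m = len(vals)
        mid = vals[m // 2] if m % 2 == 1 else 0.5 * (vals[m // 2 - 1] + vals[m // 2])
        est.append(mid)
    return est
```

A single wildly corrupted example (e.g. one coordinate set to 100) leaves the estimate unchanged, whereas the sample mean would be pulled far off.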
no code implementations • 22 Apr 2019 • Sumegha Garg, Michael P. Kim, Omer Reingold
As algorithmic prediction systems have become widespread, fears that these systems may inadvertently discriminate against members of underrepresented populations have grown.
no code implementations • ICML 2018 • Ursula Hebert-Johnson, Michael Kim, Omer Reingold, Guy Rothblum
We develop and study multicalibration as a new measure of fairness in machine learning that aims to mitigate inadvertent or malicious discrimination that is introduced at training time (even from ground truth data).
no code implementations • NeurIPS 2018 • Michael P. Kim, Omer Reingold, Guy N. Rothblum
We study the problem of fair classification within the versatile framework of Dwork et al. [ITCS '12], which assumes the existence of a metric that measures similarity between pairs of individuals.
1 code implementation • 22 Nov 2017 • Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, Guy N. Rothblum
We develop and study multicalibration -- a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data.
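Multicalibration asks that a predictor be calibrated not just overall but on each subpopulation in a given class. A minimal auditing sketch (illustrative only, not the paper's algorithm; the group membership functions, bin count, and tolerance `alpha` are hypothetical parameters):

```python
def multicalibration_violations(preds, outcomes, groups, n_bins=10, alpha=0.1):
    """Flag (group, bin) cells where the mean binary outcome deviates
    from the mean prediction by more than alpha.

    preds    : list of predictions in [0, 1]
    outcomes : list of 0/1 outcomes
    groups   : dict mapping group name -> membership test on indices
    """
    def bucket(p):
        return min(int(p * n_bins), n_bins - 1)  # p == 1.0 joins top bin

    violations = []
    for name, member in groups.items():
        for b in range(n_bins):
            idx = [i for i in range(len(preds))
                   if member(i) and bucket(preds[i]) == b]
            if not idx:
                continue
            avg_pred = sum(preds[i] for i in idx) / len(idx)
            avg_out = sum(outcomes[i] for i in idx) / len(idx)
            if abs(avg_pred - avg_out) > alpha:
                violations.append((name, b, avg_pred - avg_out))
    return violations
```

A predictor can pass this check on the whole population yet fail it on a subgroup, which is precisely the gap between ordinary calibration and multicalibration.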
1 code implementation • NeurIPS 2015 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth
We also formalize and address the general problem of data reuse in adaptive data analysis.
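The reusable-holdout idea from this line of work can be sketched in the style of Thresholdout (heavily simplified; the threshold and noise scale below are illustrative constants, not the paper's calibrated parameters):

```python
import random

def make_thresholdout(train, holdout, threshold=0.04, sigma=0.01):
    """Return a query-answering function in the style of Thresholdout.

    A query phi maps one example to a number. When the training and
    holdout averages already agree, answer with the training average and
    reveal nothing about the holdout; otherwise answer with a noised
    holdout average. Rationing holdout access this way is what allows
    many adaptively chosen queries to be answered without overfitting
    the holdout set.
    """
    def query(phi):
        t = sum(phi(x) for x in train) / len(train)
        h = sum(phi(x) for x in holdout) / len(holdout)
        if abs(t - h) < threshold + random.gauss(0, sigma):
            return t
        return h + random.gauss(0, sigma)
    return query
```

An analyst can then issue each query through `query(...)` instead of touching the holdout directly, which is what makes repeated, adaptive reuse of the same holdout statistically safe.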
no code implementations • 10 Nov 2014 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth
We show that, surprisingly, there is a way to accurately estimate exponentially many (in $n$) expectations even if the functions are chosen adaptively.