Search Results for author: Guy N. Rothblum

Found 7 papers, 0 papers with code

Consider the Alternatives: Navigating Fairness-Accuracy Tradeoffs via Disqualification

no code implementations · 2 Oct 2021 · Guy N. Rothblum, Gal Yona

The notion of "too much" is quantified via a parameter $\gamma$ that serves as a vehicle for specifying acceptable tradeoffs between accuracy and fairness, in a way that is independent of the specific metrics used to quantify fairness and accuracy in a given task.

Fairness

Outcome Indistinguishability

no code implementations · 26 Nov 2020 · Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona

Prediction algorithms assign numbers to individuals that are popularly understood as individual "probabilities" -- what is the probability of 5-year survival after cancer diagnosis?

Abstracting Fairness: Oracles, Metrics, and Interpretability

no code implementations · 4 Apr 2020 · Cynthia Dwork, Christina Ilvento, Guy N. Rothblum, Pragya Sur

Our principal conceptual result is an extraction procedure that learns the underlying truth; moreover, the procedure can learn an approximation to this truth given access to a weak form of the oracle.

Fairness · General Classification

Preference-Informed Fairness

no code implementations · 3 Apr 2019 · Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, Gal Yona

We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness.

Decision Making · Fairness

Fairness Through Computationally-Bounded Awareness

no code implementations · NeurIPS 2018 · Michael P. Kim, Omer Reingold, Guy N. Rothblum

We study the problem of fair classification within the versatile framework of Dwork et al. [ITCS '12], which assumes the existence of a metric that measures similarity between pairs of individuals.

Fairness
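The Dwork et al. [ITCS '12] framework referenced above formalizes individual fairness as a Lipschitz condition: individuals who are similar under the task metric $d$ must receive similar predictions, i.e. $|f(x) - f(y)| \le d(x, y)$. A minimal auditing sketch under that reading (the function name, tolerance parameter, and dense-matrix encoding of the metric are illustrative assumptions, not from the paper):

```python
import itertools

def metric_fairness_violations(preds, metric, tol=0.0):
    """Return pairs (i, j) violating the Lipschitz fairness condition:
    |f(x_i) - f(x_j)| <= d(x_i, x_j) + tol.

    preds:  list of predictions f(x_i) in [0, 1]
    metric: symmetric matrix, metric[i][j] = d(x_i, x_j)
    """
    n = len(preds)
    return [(i, j) for i, j in itertools.combinations(range(n), 2)
            if abs(preds[i] - preds[j]) > metric[i][j] + tol]
```

For example, two individuals at metric distance 0.05 whose predictions differ by 0.1 would be flagged, while the same predictions at distance 0.15 would not.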

Probably Approximately Metric-Fair Learning

no code implementations · ICML 2018 · Guy N. Rothblum, Gal Yona

We show that approximate metric-fairness does generalize, and leverage these generalization guarantees to construct polynomial-time PACF learning algorithms for the classes of linear and logistic predictors.

Fairness

Calibration for the (Computationally-Identifiable) Masses

no code implementations · 22 Nov 2017 · Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, Guy N. Rothblum

We develop and study multicalibration -- a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data.

Fairness
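Informally, multicalibration asks that a predictor be calibrated not just overall but on every set in a rich collection of (computationally identifiable) subpopulations. A toy sketch of what an auditor for such a property might look like, where subgroups are given as boolean masks; the binning scheme and `alpha` threshold are illustrative assumptions, not the paper's formal definition:

```python
import numpy as np

def calibration_error(preds, labels, mask, n_bins=10):
    """Max gap between mean prediction and empirical positive rate,
    over prediction bins, restricted to the subgroup given by `mask`."""
    err = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        in_bin = mask & (preds >= lo) & (preds < hi)
        if in_bin.sum() > 0:
            err = max(err, abs(preds[in_bin].mean() - labels[in_bin].mean()))
    return err

def is_multicalibrated(preds, labels, subgroups, alpha=0.1):
    """Check an alpha-calibration condition on every subgroup mask."""
    return all(calibration_error(preds, labels, g) <= alpha
               for g in subgroups)
```

A predictor can be well calibrated overall yet badly miscalibrated on a subgroup, which is exactly the failure mode this kind of check surfaces.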
