Consider the Alternatives: Navigating Fairness-Accuracy Tradeoffs via Disqualification

2 Oct 2021  ·  Guy N. Rothblum, Gal Yona

In many machine learning settings there is an inherent tension between fairness and accuracy desiderata. How should one proceed in light of such trade-offs? In this work we introduce and study $\gamma$-disqualification, a new framework for reasoning about fairness-accuracy trade-offs with respect to a benchmark class $H$ in the context of supervised learning. Our requirement stipulates that a classifier should be disqualified if it is possible to improve its fairness by switching to another classifier from $H$ without paying "too much" in accuracy. The notion of "too much" is quantified via a parameter $\gamma$ that serves as a vehicle for specifying acceptable trade-offs between accuracy and fairness, in a way that is independent of the specific metrics used to quantify fairness and accuracy in a given task. Towards this objective, we establish principled translations between units of accuracy and units of (un)fairness for different accuracy measures. We show that $\gamma$-disqualification can be used to easily compare different learning strategies in terms of how they trade off fairness and accuracy, and we give an efficient reduction from the problem of finding the optimal classifier satisfying our requirement to the problem of approximating the Pareto frontier of $H$.
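To make the requirement concrete, here is a minimal sketch of the disqualification check over a finite benchmark class. This is not the paper's implementation: the function name, the metric callables, and the linear exchange rate (converting fairness gains into accuracy units by multiplying by $\gamma$) are all assumptions made for illustration; the paper treats the translation between units of accuracy and units of (un)fairness more generally.

```python
from typing import Callable, Iterable, Sequence

# A classifier maps a feature vector to a predicted label (hypothetical type alias).
Classifier = Callable[[Sequence[float]], int]

def is_gamma_disqualified(
    h: Classifier,
    benchmark_H: Iterable[Classifier],
    accuracy: Callable[[Classifier], float],    # higher is better, e.g. in [0, 1]
    unfairness: Callable[[Classifier], float],  # lower is better, e.g. a parity gap
    gamma: float,
) -> bool:
    """Return True if some h' in H improves fairness over h without
    paying "too much" in accuracy.

    NOTE: the linear rule below (fairness gain converted into accuracy
    units at rate gamma) is one concrete reading, assumed here for the
    sketch rather than taken from the paper.
    """
    acc_h, unf_h = accuracy(h), unfairness(h)
    for h_prime in benchmark_H:
        fairness_gain = unf_h - unfairness(h_prime)  # > 0 means h' is fairer
        accuracy_loss = acc_h - accuracy(h_prime)    # > 0 means h' is less accurate
        # h' disqualifies h if its fairness gain, in accuracy units,
        # outweighs whatever accuracy it sacrifices.
        if fairness_gain > 0 and gamma * fairness_gain > accuracy_loss:
            return True
    return False
```

Under this reading, larger $\gamma$ makes the requirement stricter: even small fairness gains justify larger accuracy sacrifices, so more classifiers are disqualified. Classifiers surviving the check against all of $H$ are exactly those not dominated under the chosen accuracy-fairness exchange rate, which is what connects the requirement to approximating the Pareto frontier of $H$.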
