Does the End Justify the Means? On the Moral Justification of Fairness-Aware Machine Learning

17 Feb 2022 · Hilde Weerts, Lambèr Royakkers, Mykola Pechenizkiy

Fairness-aware machine learning (fair-ml) techniques are algorithmic interventions designed to ensure that individuals affected by the predictions of a machine learning model are treated fairly, typically as measured by a quantitative fairness metric. Despite the multitude of fairness metrics and fair-ml algorithms, there is still little guidance on the suitability of different approaches in practice. In this paper, we present a framework for moral reasoning about the justification of fairness metrics and explore the moral implications of using fair-ml algorithms that optimize for them. In particular, we argue that whether a distribution of outcomes is fair depends not only on the cause of inequalities but also on what moral claims decision subjects have to receive a particular benefit or avoid a burden. We use our framework to analyze the suitability of two fairness metrics under different circumstances. Subsequently, we explore moral arguments that support or reject the use of the fair-ml algorithm introduced by Hardt et al. (2016). We argue that under very specific circumstances, particular metrics correspond to a fair distribution of burdens and benefits. However, we also illustrate that enforcing a fairness metric by means of a fair-ml algorithm may not result in a fair distribution of outcomes and can have several undesirable side effects. We end with a call for a more holistic evaluation of fair-ml algorithms, beyond their direct optimization objectives.
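For concreteness, the sketch below shows how two widely used group fairness metrics can be computed: the demographic parity difference (the largest gap in selection rates between groups) and the equalized odds difference (the largest gap in true or false positive rates between groups, the constraint targeted by the post-processing method of Hardt et al. (2016)). This is a minimal illustration on synthetic data, not the paper's code; the metric definitions follow common usage in the fairness literature, and all variable names and data here are made up for the example.

```python
import numpy as np

def group_rates(y_true, y_pred, groups):
    """Per-group selection rate, true positive rate (TPR), and false positive rate (FPR)."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        yt, yp = y_true[m], y_pred[m]
        rates[g] = {
            "selection_rate": yp.mean(),
            "tpr": yp[yt == 1].mean() if (yt == 1).any() else np.nan,
            "fpr": yp[yt == 0].mean() if (yt == 0).any() else np.nan,
        }
    return rates

def demographic_parity_difference(y_true, y_pred, groups):
    """Largest gap in selection rates between any two groups."""
    r = group_rates(y_true, y_pred, groups)
    sel = [v["selection_rate"] for v in r.values()]
    return max(sel) - min(sel)

def equalized_odds_difference(y_true, y_pred, groups):
    """Largest gap in TPR or FPR between any two groups
    (the quantity driven to zero by equalized-odds post-processing)."""
    r = group_rates(y_true, y_pred, groups)
    tpr = [v["tpr"] for v in r.values()]
    fpr = [v["fpr"] for v in r.values()]
    return max(max(tpr) - min(tpr), max(fpr) - min(fpr))

# Illustrative synthetic data: binary ground truth, binary model
# predictions, and a binary sensitive group membership.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
groups = rng.choice(["a", "b"], size=1000)

print("demographic parity difference:", demographic_parity_difference(y_true, y_pred, groups))
print("equalized odds difference:   ", equalized_odds_difference(y_true, y_pred, groups))
```

For production use, the Fairlearn library provides implementations of both metrics as well as of Hardt et al.'s post-processing approach (fairlearn.postprocessing.ThresholdOptimizer).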
