# A Technical and Normative Investigation of Social Bias Amplification

1 Jan 2021

The conversation around the fairness of machine learning models is growing and evolving. In this work, we focus on one particular component, the issue of bias amplification: the tendency of models trained from data containing social biases to further amplify these biases. By definition, this problem stems from the algorithm, and cannot be attributed to the dataset. We make three contributions regarding its measurement. First, building off of Zhao et al. (2017), we introduce and analyze a new, decoupled metric for measuring bias amplification, $\text{BiasAmp}_{\rightarrow}$, which possesses a number of attractive properties, including the ability to pinpoint the cause of bias amplification. Second, we demonstrate the lack of consistency in values reported by fairness metrics across models that are equally accurate, and encourage the use of confidence intervals when reporting such fairness measures. Finally, we consider what bias amplification means in the context of domains where labels either don't exist at test time, or correspond to uncertain future events. We provide a deeply interrogative look at the technical measurement of bias amplification, guided by our normative ideas of what we want it to encompass.
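The paper defines its decoupled $\text{BiasAmp}_{\rightarrow}$ metric formally; as a rough illustration of the underlying idea only (in the spirit of Zhao et al. (2017)'s original co-occurrence formulation, not the paper's exact metric), the sketch below compares how often an attribute co-occurs with a task in the training labels versus in a model's predictions, and bootstraps a confidence interval as the abstract recommends. All names (`bias_score`, `bias_amplification`) and the toy data are hypothetical.

```python
import numpy as np

def bias_score(task_labels, attr_labels, attr, task):
    """Estimate P(A=attr | T=task) from paired task/attribute labels."""
    mask = task_labels == task
    if mask.sum() == 0:
        return 0.0
    return float((attr_labels[mask] == attr).mean())

def bias_amplification(train_tasks, train_attrs, pred_tasks, pred_attrs, attr, task):
    """Co-occurrence bias in predictions minus the same bias in training data.
    A positive value means the model exaggerates the training correlation."""
    return (bias_score(pred_tasks, pred_attrs, attr, task)
            - bias_score(train_tasks, train_attrs, attr, task))

# Toy data: 10 "cooking" instances (6 woman, 4 man) and 10 "other" (5/5).
train_tasks = np.array(["cooking"] * 10 + ["other"] * 10)
train_attrs = np.array(["woman"] * 6 + ["man"] * 4 + ["woman"] * 5 + ["man"] * 5)

# Hypothetical predictions that exaggerate the cooking-woman correlation (8/10).
pred_tasks = train_tasks.copy()
pred_attrs = np.array(["woman"] * 8 + ["man"] * 2 + ["woman"] * 5 + ["man"] * 5)

amp = bias_amplification(train_tasks, train_attrs,
                         pred_tasks, pred_attrs, "woman", "cooking")
# Training bias is 0.6, predicted bias is 0.8, so amp is 0.2.

# Bootstrap a 95% confidence interval for the amplification estimate,
# resampling instances (kept paired across train labels and predictions).
rng = np.random.default_rng(0)
n = len(train_tasks)
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    boot.append(bias_amplification(train_tasks[idx], train_attrs[idx],
                                   pred_tasks[idx], pred_attrs[idx],
                                   "woman", "cooking"))
ci = (float(np.percentile(boot, 2.5)), float(np.percentile(boot, 97.5)))
```

Reporting the interval `ci` alongside the point estimate `amp` reflects the abstract's point that equally accurate models can yield noticeably different fairness values, so a single number can mislead.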


