Interventions for Ranking in the Presence of Implicit Bias

23 Jan 2020 · L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi

Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member of a particular social group (e.g., defined by gender or race). Studies on implicit bias have shown that these unconscious stereotypes can lead to adverse outcomes in various social contexts, such as job screening, teaching, or policing. Recently, Kleinberg and Raghavan (2018) considered a mathematical model for implicit bias and showed the effectiveness of the Rooney Rule as a constraint to improve the utility of the outcome for certain cases of the subset selection problem. Here we study the problem of designing interventions for a generalization of subset selection -- ranking -- which requires outputting an ordered set and is a central primitive in various social and computational contexts. We present a family of simple and interpretable constraints and show that they can optimally mitigate implicit bias for a generalization of the model studied in Kleinberg and Raghavan (2018). Subsequently, we prove that, under natural distributional assumptions on the utilities of items, simple Rooney Rule-like constraints can, surprisingly, recover almost all of the utility lost to implicit bias. Finally, we augment our theoretical results with empirical findings on real-world distributions from the IIT-JEE (2009) dataset and the Semantic Scholar Research corpus.
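
To make the setup concrete, the sketch below illustrates the kind of multiplicative implicit-bias model considered by Kleinberg and Raghavan (2018), in which a biased evaluator observes the utilities of candidates from the affected group scaled down by a factor beta in (0, 1], together with a Rooney Rule-like lower bound on how many such candidates must appear in every prefix of the ranking. This is a minimal illustration only: the names (Candidate, observed_utility, rank_with_prefix_constraint), the greedy construction, and the example floor function are assumptions made here for exposition, not the paper's algorithm or its optimal constraints.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    utility: float     # true (latent) utility
    in_group_b: bool   # member of the group subject to implicit bias

def observed_utility(c: Candidate, beta: float) -> float:
    """Biased evaluator: group-B utilities appear scaled down by beta in (0, 1]."""
    return c.utility * beta if c.in_group_b else c.utility

def rank_with_prefix_constraint(cands, beta, floor):
    """Greedily rank by observed (biased) utility while enforcing that every
    prefix of length k contains at least floor(k) group-B candidates
    (a Rooney Rule-like constraint; floor(k) = 0 gives the unconstrained
    biased ranking)."""
    remaining = sorted(cands, key=lambda c: observed_utility(c, beta), reverse=True)
    ranking = []
    for k in range(1, len(cands) + 1):
        need_b = floor(k) - sum(c.in_group_b for c in ranking)
        b_left = [c for c in remaining if c.in_group_b]
        if need_b > 0 and b_left:
            # constraint binds: take the best remaining group-B candidate
            pick = max(b_left, key=lambda c: observed_utility(c, beta))
        else:
            pick = remaining[0]  # best remaining candidate by observed utility
        ranking.append(pick)
        remaining.remove(pick)
    return ranking

if __name__ == "__main__":
    pool = [Candidate("a", 0.9, False), Candidate("b", 0.8, True),
            Candidate("c", 0.7, False), Candidate("d", 0.6, True)]
    # Example constraint: at least ceil(k/2) group-B candidates in every prefix of length k.
    out = rank_with_prefix_constraint(pool, beta=0.5, floor=lambda k: (k + 1) // 2)
    print([c.name for c in out])
```

With beta = 0.5 the unconstrained biased ranking would place both group-B candidates last; the prefix constraint pulls them forward, which is the kind of intervention whose effect on true utility the paper analyzes.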
