A Causal Linear Model to Quantify Edge Flow and Edge Unfairness for Unfair Edge Prioritization and Discrimination Removal

10 Jul 2020  ·  Pavan Ravishankar, Pranshu Malviya, Balaraman Ravindran

With limited resources, law enforcement must prioritize the sources of unfairness before mitigating the unfairness they cause. Unlike previous works that only make cautionary claims of discrimination and de-bias data after it has been generated, this paper attempts to prioritize unfair sources before mitigating their unfairness in the real world. We assume that a causal Bayesian network representative of the data-generation procedure is given, along with the sensitive nodes that give rise to unfairness. We quantify Edge Flow, the belief flowing along an edge after attenuating indirect path influences, and use it to quantify Edge Unfairness. We prove that, under an error-free linear model of conditional probability, cumulative unfairness in any decision (e.g., judicial bail) towards any sensitive group (e.g., race) vanishes when edge unfairness is absent. We then measure the potential to mitigate cumulative unfairness when edge unfairness is decreased. Based on these measures, we propose an unfair edge prioritization algorithm that ranks the unfair edges and a discrimination removal procedure that de-biases the generated data distribution. The experimental section validates the specifications used for quantifying the above measures.
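To make the prioritization idea concrete, below is a minimal, illustrative Python sketch. It assumes a toy linear model of a conditional probability and takes the magnitude of the learned weight on a sensitive-parent edge as that edge's unfairness score; the function names (fit_linear_cpd, edge_unfairness, prioritize_edges), the toy data, and this particular quantification are assumptions for illustration, not the paper's exact definitions of Edge Flow or Edge Unfairness.

```python
# Illustrative sketch only: a toy linear model of a conditional probability in a
# small causal graph (race -> bail, income -> bail). The quantification below
# (|linear weight| of a sensitive-parent edge as its unfairness score) is an
# assumption for illustration, not the paper's formulation.
import numpy as np

def fit_linear_cpd(parent_samples, child_samples):
    """Fit P(child = 1 | parents) as a least-squares linear function of the parents."""
    X = np.column_stack([parent_samples, np.ones(len(child_samples))])  # add intercept
    coef, *_ = np.linalg.lstsq(X, child_samples, rcond=None)
    return coef[:-1], coef[-1]  # per-parent weights, intercept

def edge_unfairness(weights, parent_names, sensitive):
    """Score each sensitive-parent -> child edge by the magnitude of its weight."""
    return {p: abs(w) for p, w in zip(parent_names, weights) if p in sensitive}

def prioritize_edges(scores):
    """Rank unfair edges by score, highest first."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    race = rng.integers(0, 2, n)      # sensitive node
    income = rng.integers(0, 2, n)    # non-sensitive node
    # Toy "bail" decision influenced by both parents plus noise
    bail = (0.3 * race + 0.5 * income + 0.1 * rng.random(n) > 0.45).astype(float)

    weights, _ = fit_linear_cpd(np.column_stack([race, income]), bail)
    scores = edge_unfairness(weights, ["race -> bail", "income -> bail"], {"race -> bail"})
    print(prioritize_edges(scores))
```

In this sketch, de-biasing would amount to shrinking the weights of the highest-ranked unfair edges before regenerating the distribution; the paper instead defines these quantities through edge flows in the causal Bayesian network.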
