Improving Fair Predictions Using Variational Inference In Causal Models

25 Aug 2020  ·  Rik Helwegen, Christos Louizos, Patrick Forré ·

The importance of algorithmic fairness grows with the increasing impact machine learning has on people's lives. Recent work on fairness metrics shows the need for causal reasoning in fairness constraints. In this work, a practical method named FairTrade is proposed for creating flexible prediction models that integrate fairness constraints on sensitive causal paths. The method uses recent advances in variational inference to account for unobserved confounders. Further, a method outline is proposed which uses the causal mechanism estimates to audit black box models. Experiments are conducted on simulated data and on a real dataset in the context of detecting unlawful social welfare. This research aims to contribute to machine learning techniques which honour our ethical and legal boundaries.
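The abstract does not give implementation detail, but the core variational-inference idea it relies on can be illustrated with a minimal sketch: posit a latent (unobserved) confounder z behind observed proxies x, and estimate the evidence lower bound (ELBO) with a reparameterised Monte Carlo sample. All names and the linear-Gaussian model below are illustrative assumptions, not the paper's actual FairTrade architecture.

```python
import numpy as np

rng = np.random.default_rng(0)


def gaussian_log_pdf(x, mean, var):
    """Elementwise log density of N(mean, var)."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)


def elbo(x, enc_w=0.5, dec_w=1.0, obs_var=1.0):
    """Single-sample Monte Carlo ELBO for a toy latent-confounder model.

    Hypothetical setup (not from the paper):
      prior      p(z)   = N(0, 1)        -- latent confounder
      "decoder"  p(x|z) = N(dec_w*z, obs_var)
      "encoder"  q(z|x) = N(enc_w*x, 1)  -- variational posterior
    """
    mu_q = enc_w * x
    eps = rng.standard_normal(np.shape(x))
    z = mu_q + eps  # reparameterised sample from q(z|x)
    log_px_z = gaussian_log_pdf(x, dec_w * z, obs_var)
    log_pz = gaussian_log_pdf(z, 0.0, 1.0)
    log_qz = gaussian_log_pdf(z, mu_q, 1.0)
    # ELBO = E_q[log p(x|z) + log p(z) - log q(z|x)]
    return float(np.mean(log_px_z + log_pz - log_qz))


x = rng.standard_normal(1000)
print(elbo(x))
```

In a fairness setting like the one the abstract describes, the inferred posterior over z would additionally feed downstream predictors so that constraints can be placed on specific causal paths; that machinery is omitted here.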
