8 Oct 2020 • Gareth P. Jones, James M. Hickey, Pietro G. Di Stefano, Charanpal Dhanjal, Laura C. Stoddart, Vlasios Vasileiou
We found that fairness-unaware algorithms typically fail to produce adequately fair models and that the simplest algorithms are not necessarily the fairest ones.
11 Mar 2020 • James M. Hickey, Pietro G. Di Stefano, Vlasios Vasileiou
To satisfy this definition, we develop a framework for mitigating model bias using regularizations constructed from the SHAP values of an adversarial surrogate model.
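The idea of penalising a model through the attributions an adversary assigns to a protected attribute can be illustrated with a minimal sketch. Everything below is hypothetical and not the authors' implementation: it assumes a linear model, for which the SHAP value of feature j is simply phi_j = w_j * (x_j - mean(x_j)), and adds the mean |phi| of the protected feature as a regulariser to a hand-rolled logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one "protected" binary feature correlated with the label.
n = 2000
protected = rng.integers(0, 2, n).astype(float)
x_other = rng.normal(size=n)
X = np.column_stack([x_other, protected])
y = (x_other + 1.5 * protected + rng.normal(scale=0.5, size=n) > 0.75).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, fair_weight=0.0, protected_idx=1, lr=0.1, steps=2000):
    """Logistic regression with an illustrative SHAP-style fairness penalty.

    For a linear score the SHAP attribution of feature j is
    phi_j = w_j * (x_j - mean(x_j)), so penalising mean(|phi_protected|)
    directly shrinks the model's reliance on the protected feature.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    xc = X - X.mean(axis=0)  # centred features: SHAP values of a linear model
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        grad_w = X.T @ (p - y) / n
        grad_b = np.mean(p - y)
        # Subgradient of fair_weight * mean(|w_j * xc_j|) w.r.t. w_j
        j = protected_idx
        grad_w[j] += fair_weight * np.sign(w[j]) * np.mean(np.abs(xc[:, j]))
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w_plain, _ = fit_logreg(X, y, fair_weight=0.0)
w_fair, _ = fit_logreg(X, y, fair_weight=2.0)
# The penalty shrinks the coefficient on the protected feature.
print(abs(w_plain[1]), abs(w_fair[1]))
```

The paper's framework uses SHAP values of an adversarial surrogate model rather than the model's own attributions; the sketch only shows the general mechanism of attribution-based regularisation.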
25 Feb 2020 • Pietro G. Di Stefano, James M. Hickey, Vlasios Vasileiou
We develop regularizations that target classical fairness measures, and present a causal regularization that satisfies our new fairness definition by removing the impact of unprivileged group variables on the model outcomes, as measured by the controlled direct effect (CDE).
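A causal regularization of this kind can be sketched in a few lines. This is an assumption-laden illustration, not the paper's method: for a linear score, the controlled direct effect of flipping a binary group variable A from 0 to 1 while holding mediators fixed is just the coefficient w_A, so penalising the squared score difference reduces to a ridge penalty on w_A.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic causal setup: group A influences a mediator M, and both affect y.
n = 2000
A = rng.integers(0, 2, n).astype(float)       # group membership (binary)
M = 0.8 * A + rng.normal(scale=0.5, size=n)   # mediator influenced by A
y = (M + 0.6 * A + rng.normal(scale=0.4, size=n) > 0.7).astype(float)
X = np.column_stack([M, A])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(X, y, cde_weight=0.0, group_idx=1, lr=0.1, steps=3000):
    """Logistic regression with an illustrative CDE-style penalty.

    The penalty is the squared difference in the linear score when A is
    flipped 0 -> 1 with the mediator held fixed, which equals w_A**2.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        gw = X.T @ (p - y) / n
        gb = np.mean(p - y)
        gw[group_idx] += cde_weight * 2 * w[group_idx]  # d/dw_A of w_A**2
        w -= lr * gw
        b -= lr * gb
    return w, b

w_plain, _ = fit(X, y)
w_causal, _ = fit(X, y, cde_weight=5.0)
# Direct effect of A on the score is driven toward zero, while the
# mediated path through M is left untouched.
print(abs(w_plain[1]), abs(w_causal[1]))
```

The key design point mirrored here is that only the direct path A → outcome is suppressed; effects flowing through the mediator remain, which is what distinguishes a CDE-based regularizer from simply dropping the group variable.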