1 code implementation • 6 Jun 2022 • Joachim Baumann, Corinna Hertweck, Michele Loi, Christoph Heitz
Group fairness metrics are an established way of assessing the fairness of prediction-based decision-making systems.
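One widely used group fairness metric is demographic parity (also called independence), which requires the positive-decision rate to be equal across protected groups. The following is a minimal, self-contained sketch of how such a metric is computed; the data and names are illustrative and not taken from the paper.

```python
# Sketch of a group fairness metric: demographic parity gap.
# Decisions and group labels below are hypothetical example data.

def positive_rate(decisions, groups, group):
    """Share of positive decisions within one protected group."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(decisions, groups):
    """Absolute difference between the highest and lowest
    positive-decision rates across groups (0 means parity)."""
    rates = {g: positive_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

decisions = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = positive decision
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
# group "a" rate = 0.75, group "b" rate = 0.25, so gap = 0.5
```

A gap of 0 indicates that the decision rule satisfies demographic parity; the further the gap is from 0, the larger the disparity between groups.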
2 code implementations • 6 Jun 2022 • Corinna Hertweck, Joachim Baumann, Michele Loi, Eleonora Viganò, Christoph Heitz
This allows us to derive a fairness score that we then compare to the decision maker's utility.
1 code implementation • 9 Sep 2021 • Corinna Hertweck, Tim Räz
This establishes that the degree to which some fairness measures are satisfied can be increased simultaneously: such fairness measures are gradually compatible.
no code implementations • 9 Sep 2021 • Corinna Hertweck, Christoph Heitz
While the field of algorithmic fairness has brought forth many ways to measure and improve the fairness of machine learning models, these findings are still not widely used in practice.
no code implementations • 4 Nov 2020 • Corinna Hertweck, Christoph Heitz, Michele Loi
This means that the question of whether independence should be used cannot be satisfactorily answered by considering only the justness of differences in the predictive features.
no code implementations • 2 Jul 2020 • Corinna Hertweck, Carlos Castillo, Michael Mathioudakis
In this paper, we study university admissions under a centralized system that uses grades and standardized test scores to match applicants to university programs.
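Centralized admission systems of this kind are commonly implemented with an applicant-proposing deferred-acceptance (Gale–Shapley) mechanism in which programs rank applicants by their score. The sketch below illustrates that mechanism under this assumption; the applicant names, scores, preferences, and capacities are all hypothetical, and this is not necessarily the exact system studied in the paper.

```python
# Hedged sketch of applicant-proposing deferred acceptance for a
# centralized admission system where programs rank applicants by a
# single score (e.g. combining grades and standardized tests).
# All names, scores, preferences, and capacities are hypothetical.

def deferred_acceptance(prefs, scores, capacity):
    """prefs: applicant -> ordered list of preferred programs.
    scores: applicant -> numeric score (higher is better).
    capacity: program -> number of seats.
    Returns program -> list of admitted applicants."""
    next_choice = {a: 0 for a in prefs}      # index of next program to try
    held = {p: [] for p in capacity}         # tentatively admitted applicants
    free = set(prefs)                        # applicants not yet placed
    while free:
        a = free.pop()
        if next_choice[a] >= len(prefs[a]):
            continue                         # preference list exhausted: unmatched
        p = prefs[a][next_choice[a]]
        next_choice[a] += 1
        held[p].append(a)
        held[p].sort(key=lambda x: -scores[x])   # best scores first
        if len(held[p]) > capacity[p]:
            bumped = held[p].pop()           # lowest-scoring applicant rejected
            free.add(bumped)                 # may apply to their next choice
    return held

prefs = {"ana": ["cs", "law"], "ben": ["cs", "law"], "eva": ["cs"]}
scores = {"ana": 92, "ben": 85, "eva": 88}
capacity = {"cs": 1, "law": 1}
match = deferred_acceptance(prefs, scores, capacity)
# "ana" takes the cs seat (highest score), "eva" is bumped and left
# unmatched (only listed cs), and "ben" gets the law seat.
```

Because rejections are only tentative until every applicant is placed or has exhausted their list, the resulting assignment is stable: no applicant and program would both prefer each other over their final match.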