Search Results for author: Kristina Peters

Found 1 paper, 0 papers with code

What Is Fairness? On the Role of Protected Attributes and Fictitious Worlds

no code implementations · 19 May 2022 · Ludwig Bothmann, Kristina Peters, Bernd Bischl

A growing body of literature in fairness-aware machine learning (fairML) aspires to mitigate ML-related unfairness in automated decision-making (ADM) by defining metrics that measure the fairness of an ML model and by proposing methods that ensure trained ML models achieve low values on those metrics.

Tags: Decision Making, Fairness