Search Results for author: Julia Stoyanovich

Found 22 papers, 2 papers with code

ShaRP: Explaining Rankings with Shapley Values

no code implementations · 30 Jan 2024 · Venetia Pliatsika, Joao Fonseca, Tilun Wang, Julia Stoyanovich

Using ShaRP, we show that even when the scoring function used by an algorithmic ranker is known and linear, the weight of each feature does not correspond to its Shapley value contribution.
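The claim can be checked on a toy example (hypothetical data and a mean-baseline value function; this is an illustrative sketch, not the authors' ShaRP implementation): exact Shapley contributions of each feature to an item's *rank* under a known linear scorer with equal weights turn out to be unequal, because rank is a nonlinear function of the features.

```python
# Sketch: Shapley contributions of features to an item's rank under a
# known linear scorer. Data, weights, and the mean-baseline value
# function are illustrative assumptions.
from itertools import combinations
from math import factorial

data = [
    [0.9, 0.1],
    [0.5, 0.6],
    [0.4, 0.9],
    [0.2, 0.3],
]
weights = [0.5, 0.5]  # equal weights in the linear scoring function

def score(x):
    return sum(w * v for w, v in zip(weights, x))

def rank_of(x, rows):
    """1-based rank of x among rows (higher score = better rank)."""
    s = score(x)
    return 1 + sum(score(r) > s for r in rows)

means = [sum(col) / len(data) for col in zip(*data)]

def value(x, coalition):
    """Negated rank of x when only features in `coalition` keep their true
    values; the rest are replaced by dataset means (one baseline choice)."""
    z = [x[i] if i in coalition else means[i] for i in range(len(x))]
    return -rank_of(z, data)

def shapley(x):
    """Exact Shapley values by enumerating all feature coalitions."""
    n = len(x)
    phis = []
    for i in range(n):
        phi = 0.0
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                coeff = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += coeff * (value(x, set(s) | {i}) - value(x, set(s)))
        phis.append(phi)
    return phis

phis = shapley(data[0])
print(phis)  # equal weights, yet unequal contributions to the rank
```

On this toy data the two features get Shapley rank contributions of 2.0 and -1.0 despite identical scoring weights, matching the snippet's point.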

Fairness in Algorithmic Recourse Through the Lens of Substantive Equality of Opportunity

no code implementations · 29 Jan 2024 · Andrew Bell, Joao Fonseca, Carlo Abrate, Francesco Bonchi, Julia Stoyanovich

Building upon an agent-based framework for simulating recourse, this paper demonstrates how much effort is needed to overcome disparities in initial circumstances.

Decision Making, Fairness

A New Paradigm for Counterfactual Reasoning in Fairness and Recourse

no code implementations · 25 Jan 2024 · Lucius E. J. Bynum, Joshua R. Loftus, Julia Stoyanovich

The traditional paradigm for counterfactual reasoning in this literature is the interventional counterfactual, where hypothetical interventions are imagined and simulated.

Counterfactual Reasoning, +1 more

A Simple and Practical Method for Reducing the Disparate Impact of Differential Privacy

no code implementations · 18 Dec 2023 · Lucas Rosenblatt, Julia Stoyanovich, Christopher Musco

Our theoretical results center on the private mean estimation problem, while our empirical results center on extensive experiments on private data synthesis to demonstrate the effectiveness of stratification on a variety of private mechanisms.
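As a minimal sketch of the setting (the standard Laplace mechanism applied per stratum; the paper's mechanisms and analysis are more involved, and the group data here is hypothetical): releasing each stratum's mean separately keeps a small group's estimate from being swamped by the majority, though its noise scale still grows as its size shrinks, which is the disparity at issue.

```python
# Sketch: eps-DP mean estimation via the Laplace mechanism, applied
# per stratum. Values are clipped to [lo, hi]; the mean then has
# sensitivity (hi - lo) / n, so Laplace noise of scale sensitivity/eps
# gives eps-differential privacy for each stratum.
import math
import random

def private_mean(values, eps, lo=0.0, hi=1.0, rng=random):
    """eps-DP estimate of the mean of `values`."""
    n = len(values)
    clipped = [min(max(v, lo), hi) for v in values]
    scale = (hi - lo) / (n * eps)  # Laplace scale = sensitivity / eps
    u = rng.random() - 0.5         # inverse-CDF Laplace sampling
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return sum(clipped) / n + noise

# Stratified release over two hypothetical groups: the minority group's
# estimate is noisier (scale 1/100 vs 1/900 at eps=1), but it is not
# dominated by the majority as it would be in a single pooled mean.
rng = random.Random(0)
groups = {"majority": [0.6] * 900, "minority": [0.2] * 100}
estimates = {g: private_mean(v, eps=1.0, rng=rng) for g, v in groups.items()}
print(estimates)
```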

Setting the Right Expectations: Algorithmic Recourse Over Time

no code implementations · 13 Sep 2023 · Joao Fonseca, Andrew Bell, Carlo Abrate, Francesco Bonchi, Julia Stoyanovich

The bulk of the literature on algorithmic recourse to date focuses primarily on how to provide recourse to a single individual, overlooking a critical element: the effects of a continuously changing context.

Decision Making

The Unbearable Weight of Massive Privilege: Revisiting Bias-Variance Trade-Offs in the Context of Fair Prediction

no code implementations · 17 Feb 2023 · Falaah Arif Khan, Julia Stoyanovich

In this paper we revisit the bias-variance decomposition of model error from the perspective of designing a fair classifier: we are motivated by the widely held socio-technical belief that noise variance in large datasets in social domains tracks demographic characteristics such as gender, race, disability, etc.


The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice

no code implementations · 13 Feb 2023 · Andrew Bell, Lucius Bynum, Nazarii Drushchak, Tetiana Herasymova, Lucas Rosenblatt, Julia Stoyanovich

The ``impossibility theorem'' -- which is considered foundational in algorithmic fairness literature -- asserts that there must be trade-offs between common notions of fairness and performance when fitting statistical models, except in two special cases: when the prevalence of the outcome being predicted is equal across groups, or when a perfectly accurate predictor is used.
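The arithmetic behind both special cases follows from Bayes' rule: with equal true and false positive rates across groups, positive predictive value is pinned down by prevalence, so it can only match across groups when prevalences match or the predictor is perfect. A toy check with illustrative numbers:

```python
# Sketch of the identity underlying the impossibility theorem.
def ppv(prevalence, tpr, fpr):
    """Positive predictive value via Bayes' rule."""
    return prevalence * tpr / (prevalence * tpr + (1 - prevalence) * fpr)

# Equal error rates, unequal base rates -> predictive parity fails.
ppv_a, ppv_b = ppv(0.3, 0.8, 0.1), ppv(0.5, 0.8, 0.1)

# Escape hatch: a perfect predictor (TPR=1, FPR=0) gives PPV=1 everywhere.
perfect = (ppv(0.3, 1.0, 0.0), ppv(0.5, 1.0, 0.0))
print(ppv_a, ppv_b, perfect)
```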


On Fairness and Stability: Is Estimator Variance a Friend or a Foe?

no code implementations · 9 Feb 2023 · Falaah Arif Khan, Denys Herasymuk, Julia Stoyanovich

We demonstrate when group-wise statistical bias analysis gives an incomplete picture, and what group-wise variance analysis can tell us in settings that differ in the magnitude of statistical bias.

Fairness, Uncertainty Quantification

Counterfactuals for the Future

no code implementations · 7 Dec 2022 · Lucius E. J. Bynum, Joshua R. Loftus, Julia Stoyanovich

Counterfactuals are often described as 'retrospective,' focusing on hypothetical alternatives to a realized past.


Towards Substantive Conceptions of Algorithmic Fairness: Normative Guidance from Equal Opportunity Doctrines

no code implementations · 6 Jul 2022 · Falaah Arif Khan, Eleni Manis, Julia Stoyanovich

In this work we use Equal Opportunity (EO) doctrines from political philosophy to make explicit the normative judgements embedded in different conceptions of algorithmic fairness.

Fairness, Philosophy

Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance

no code implementations · 10 Jun 2022 · Andrew Bell, Oded Nov, Julia Stoyanovich

Increasingly, laws are being proposed and passed by governments around the world to regulate Artificial Intelligence (AI) systems deployed in the public and private sectors.

Spending Privacy Budget Fairly and Wisely

no code implementations · 27 Apr 2022 · Lucas Rosenblatt, Joshua Allen, Julia Stoyanovich

Our methods are based on the insights that feature importance can inform how privacy budget is allocated, and, further, that per-group feature importance and fairness-related performance objectives can be incorporated in the allocation.

Fairness, Feature Importance, +1 more
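The first insight can be sketched as a proportional allocation (a hypothetical scheme with made-up importance scores; the paper's actual mechanisms also fold in per-group importance and fairness objectives). By sequential composition, per-feature budgets that sum to the total still satisfy total-epsilon differential privacy.

```python
# Sketch: split a total privacy budget across features in proportion to
# their importance, so more budget goes to features that matter more.
def allocate_budget(importances, total_eps):
    """Return a per-feature epsilon proportional to importance."""
    total = sum(importances.values())
    return {f: total_eps * w / total for f, w in importances.items()}

# Hypothetical importance scores; by sequential composition the
# per-feature epsilons compose back to the total budget.
alloc = allocate_budget({"income": 0.5, "age": 0.3, "zip": 0.2}, total_eps=1.0)
print(alloc)
```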

An External Stability Audit Framework to Test the Validity of Personality Prediction in AI Hiring

no code implementations · 23 Jan 2022 · Alene K. Rhea, Kelsey Markey, Lauren D'Arinzo, Hilke Schellmann, Mona Sloane, Paul Squires, Falaah Arif Khan, Julia Stoyanovich

Our approach is to (a) develop a methodology for an external audit of stability of predictions made by algorithmic personality tests, and (b) instantiate this methodology in an audit of two systems, Humantic AI and Crystal.

Disaggregated Interventions to Reduce Inequality

1 code implementation · 1 Jul 2021 · Lucius E. J. Bynum, Joshua R. Loftus, Julia Stoyanovich

We develop a disaggregated approach to tackling pre-existing disparities that relaxes the typical set of assumptions required for the use of social categories in structural causal models.

Fairness as Equality of Opportunity: Normative Guidance from Political Philosophy

no code implementations · 15 Jun 2021 · Falaah Arif Khan, Eleni Manis, Julia Stoyanovich

Through our EOP-framework we hope to answer what it means for an ADS to be fair from a moral and political philosophy standpoint, and to pave the way for similar scholarship from ethics and legal experts.

Ethics, Fairness, +1 more

Fairness in Ranking: A Survey

no code implementations · 25 Mar 2021 · Meike Zehlike, Ke Yang, Julia Stoyanovich

In this survey, we describe four classification frameworks for fairness-enhancing interventions, along which we relate the technical methods surveyed in this paper, discuss evaluation datasets, and present technical work on fairness in score-based ranking.

Fairness, Information Retrieval, +4 more

Fairness and Friends

no code implementations · ICLR Workshop Rethinking_ML_Papers 2021 · Falaah Arif Khan, Eleni Manis, Julia Stoyanovich

Recent interest in codifying fairness in Automated Decision Systems (ADS) has resulted in a wide range of formulations of what it means for an algorithm to be “fair.” Most of these propositions are inspired by, but inadequately grounded in, scholarship from political philosophy.

Fairness, Philosophy

Causal intersectionality for fair ranking

2 code implementations · 15 Jun 2020 · Ke Yang, Joshua R. Loftus, Julia Stoyanovich

In this paper we propose a causal modeling approach to intersectional fairness, and a flexible, task-specific method for computing intersectionally fair rankings.

Causal Inference, Fairness

Teaching Responsible Data Science: Charting New Pedagogical Territory

no code implementations · 23 Dec 2019 · Julia Stoyanovich, Armanda Lewis

Recounting our own experience, and leveraging literature on pedagogical methods in data science and beyond, we propose the notion of an "object-to-interpret-with".

Decision Making, Ethics

FairPrep: Promoting Data to a First-Class Citizen in Studies on Fairness-Enhancing Interventions

no code implementations · 28 Nov 2019 · Sebastian Schelter, Yuxuan He, Jatin Khilnani, Julia Stoyanovich

FairPrep is based on a developer-centered design, and helps data scientists follow best practices in software engineering and machine learning.

BIG-bench Machine Learning, Decision Making, +2 more

Balanced Ranking with Diversity Constraints

no code implementations · 4 Jun 2019 · Ke Yang, Vasilis Gkatzelis, Julia Stoyanovich

Many set selection and ranking algorithms have recently been enhanced with diversity constraints that aim to explicitly increase representation of historically disadvantaged populations, or to improve the overall representativeness of the selected set.

Diversity, Fairness

Computational Social Choice Meets Databases

no code implementations · 10 May 2018 · Benny Kimelfeld, Phokion G. Kolaitis, Julia Stoyanovich

At the conceptual level, we give rigorous semantics to queries in this framework by introducing the notions of necessary answers and possible answers to queries.
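The distinction can be illustrated in a toy preference setting (hypothetical; the paper develops the formal query semantics): with a partially specified ranking, a necessary answer holds in every consistent completion, while a possible answer holds in at least one.

```python
# Sketch: necessary vs possible top candidate under a partial ranking.
from itertools import permutations

candidates = ["a", "b", "c"]
known = {("a", "b")}  # the only known preference: a is ranked above b

def consistent(order):
    """True if `order` respects every known pairwise preference."""
    return all(order.index(x) < order.index(y) for x, y in known)

# All linear extensions (completions) of the partial ranking.
completions = [p for p in permutations(candidates) if consistent(p)]

def necessary_top(c):
    return all(p[0] == c for p in completions)  # top in every completion

def possible_top(c):
    return any(p[0] == c for p in completions)  # top in some completion

print(completions)
```

Here "b" cannot possibly be ranked first, "a" and "c" both possibly can, and no candidate is necessarily first.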

