Search Results for author: Amanda Bower

Found 9 papers, 3 papers with code

Training individually fair ML models with Sensitive Subspace Robustness

2 code implementations • ICLR 2020 • Mikhail Yurochkin, Amanda Bower, Yuekai Sun

We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs.

BIG-bench Machine Learning • Fairness
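
The invariance idea in this entry lends itself to a short illustration. Below is a minimal PyTorch sketch, assuming the sensitive directions are given as the columns of a basis matrix, of training against worst-case perturbations confined to that subspace. The names (model, basis, loss_fn) are illustrative assumptions; the authors' released implementations are linked from the entry and not reproduced here.

    import torch

    def worst_sensitive_perturbation(model, x, y, basis, loss_fn, steps=10, lr=0.1):
        """Gradient-ascent search for a worst-case perturbation of x that is
        constrained to the span of the sensitive directions `basis` (d x k)."""
        coef = torch.zeros(x.size(0), basis.size(1), requires_grad=True)
        for _ in range(steps):
            delta = coef @ basis.T               # perturbation inside the subspace
            loss = loss_fn(model(x + delta), y)
            grad, = torch.autograd.grad(loss, coef)
            with torch.no_grad():
                coef += lr * grad                # ascend: make the loss worse
        return (coef @ basis.T).detach()

    def train_step(model, optimizer, x, y, basis, loss_fn):
        """One robust training step: fit the adversarially perturbed batch."""
        delta = worst_sensitive_perturbation(model, x, y, basis, loss_fn)
        optimizer.zero_grad()
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        optimizer.step()
        return loss.item()

If the model is invariant along the sensitive subspace, the inner search finds no loss-increasing perturbation, which is the sense of fairness the abstract describes.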

De-biasing "bias" measurement

1 code implementation • 11 May 2022 • Kristian Lum, Yunfeng Zhang, Amanda Bower

When a model's performance differs across socially or culturally relevant groups--like race, gender, or the intersections of many such groups--it is often called "biased."

Decision Making • Fairness • +1
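
The measurement pitfall behind this entry can be seen in a few lines of NumPy: even when every group has the same true error rate, sampling noise alone makes the observed per-group rates differ, so a naive max-min gap overstates disparity. The group sizes and error rate below are hypothetical, and the paper's own corrected estimator is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    true_error = 0.10                      # every group has the SAME true error
    group_sizes = [40, 60, 500, 2000]      # small groups give noisier estimates

    observed = [rng.binomial(n, true_error) / n for n in group_sizes]
    naive_spread = max(observed) - min(observed)
    print(f"observed per-group errors: {np.round(observed, 3)}")
    print(f"naive max-min gap: {naive_spread:.3f}  (true gap is 0)")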

Preference Modeling with Context-Dependent Salient Features

1 code implementation • 22 Feb 2020 • Amanda Bower, Laura Balzano

Finally, we demonstrate strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the US.
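
A rough sense of the model class in this entry (and its ICML 2020 version below): in a pairwise comparison, only the features on which the two items differ most are allowed to influence the outcome. The top-k selection rule and logistic likelihood in this sketch are illustrative assumptions, not the paper's exact specification.

    import numpy as np

    def salient_utility_gap(w, zi, zj, k=2):
        """Utility difference computed only on the k most-differing features."""
        diff = zi - zj
        salient = np.argsort(-np.abs(diff))[:k]   # indices of salient features
        mask = np.zeros_like(diff)
        mask[salient] = 1.0
        return w @ (diff * mask)

    def nll(w, comparisons, k=2):
        """Negative log-likelihood of observed wins under a logistic link."""
        total = 0.0
        for zi, zj in comparisons:                # each pair: zi beat zj
            gap = salient_utility_gap(w, zi, zj, k)
            total += np.logaddexp(0.0, -gap)      # -log sigmoid(gap), stably
        return total

The resulting negative log-likelihood can then be minimized over w with any generic optimizer, e.g. scipy.optimize.minimize.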

Fair Pipelines

no code implementations • 3 Jul 2017 • Amanda Bower, Sarah N. Kitchen, Laura Niss, Martin J. Strauss, Alexander Vargas, Suresh Venkatasubramanian

This work facilitates ensuring fairness of machine learning in the real world by decoupling fairness considerations in compound decisions.

BIG-bench Machine Learning • Decision Making • +1

Preference modelling with context-dependent salient features

no code implementations • ICML 2020 • Amanda Bower, Laura Balzano

Finally, we demonstrate the strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the United States.

Individually Fair Ranking

no code implementations • 19 Mar 2021 • Amanda Bower, Hamid Eftekhari, Mikhail Yurochkin, Yuekai Sun

We develop an algorithm to train individually fair learning-to-rank (LTR) models.

Fairness • Learning-To-Rank

Measuring Disparate Outcomes of Content Recommendation Algorithms with Distributional Inequality Metrics

no code implementations • 3 Feb 2022 • Tomo Lazovich, Luca Belli, Aaron Gonzales, Amanda Bower, Uthaipon Tantipongpipat, Kristian Lum, Ferenc Huszar, Rumman Chowdhury

We show that we can use these metrics to identify content suggestion algorithms that contribute more strongly to skewed outcomes between users.
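
One member of the metric family this entry draws on is the Gini coefficient applied to per-user exposure or engagement counts. A minimal sketch, with hypothetical impression counts for two hypothetical algorithms:

    import numpy as np

    def gini(values):
        """Gini coefficient of a non-negative 1-D array (0 = perfectly equal,
        values near 1 = concentrated on a few users)."""
        v = np.sort(np.asarray(values, dtype=float))
        n = v.size
        if v.sum() == 0:
            return 0.0
        ranks = np.arange(1, n + 1)               # standard rank-based formula
        return (2 * (ranks * v).sum()) / (n * v.sum()) - (n + 1) / n

    impressions_algo_a = [5, 7, 6, 8, 4]          # fairly even exposure
    impressions_algo_b = [0, 1, 0, 2, 27]         # concentrated exposure
    print(gini(impressions_algo_a))               # closer to 0
    print(gini(impressions_algo_b))               # closer to 1

Comparing the metric across candidate algorithms is what lets one flag those that contribute more strongly to skewed outcomes.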

Random Isn't Always Fair: Candidate Set Imbalance and Exposure Inequality in Recommender Systems

no code implementations • 12 Sep 2022 • Amanda Bower, Kristian Lum, Tomo Lazovich, Kyra Yee, Luca Belli

Traditionally, recommender systems operate by returning a user a set of items, ranked in order of estimated relevance to that user.

Fairness • Recommendation Systems
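
The exposure quantity at stake in this entry can be made concrete with a position-discounted tally over ranked slates. The logarithmic discount below is a common convention (as in DCG), assumed here rather than taken from the paper.

    import math
    from collections import defaultdict

    def add_exposure(slate, totals):
        """Credit each item in a ranked slate with position-discounted exposure."""
        for pos, item in enumerate(slate, start=1):
            totals[item] += 1.0 / math.log2(pos + 1)

    totals = defaultdict(float)
    add_exposure(["a", "b", "c"], totals)   # one user's ranked results
    add_exposure(["b", "a", "c"], totals)   # another user's
    print(dict(totals))                     # "a" and "b" tie; "c" trails

Inequality in these totals across items (or their producers) is the exposure disparity the paper studies in relation to candidate set imbalance.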
