Search Results for author: Flavien Prost

Found 7 papers, 1 paper with code

Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning

no code implementations · 4 Jun 2021 · Yuyan Wang, Xuezhi Wang, Alex Beutel, Flavien Prost, Jilin Chen, Ed H. Chi

This presents a multi-dimensional Pareto frontier on (1) the trade-off between group fairness and accuracy with respect to each task, as well as (2) the trade-offs across multiple tasks.

Fairness · Multi-Task Learning
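
As a rough illustration (not the paper's algorithm), one way to trace such a frontier is linear scalarization: combine per-task losses and per-task fairness gaps with trade-off weights, then sweep the weights. All function and variable names below are hypothetical.

```python
import numpy as np

def scalarized_objective(task_losses, fairness_gaps, task_weights, fairness_weights):
    """Linear scalarization: one scalar objective per weight setting.
    Sweeping the weights yields candidate points on the frontier."""
    return float(np.dot(task_weights, task_losses)
                 + np.dot(fairness_weights, fairness_gaps))

# Two tasks; each lambda trades per-task fairness against accuracy.
for lam in (0.0, 0.5, 1.0):
    obj = scalarized_objective(
        task_losses=[0.31, 0.42],     # hypothetical per-task losses
        fairness_gaps=[0.08, 0.05],   # hypothetical per-task group gaps
        task_weights=[1.0, 1.0],
        fairness_weights=[lam, lam],
    )
    print(f"lambda={lam}: objective={obj:.3f}")
```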

Measuring Recommender System Effects with Simulated Users

no code implementations · 12 Jan 2021 · Sirui Yao, Yoni Halpern, Nithum Thain, Xuezhi Wang, Kang Lee, Flavien Prost, Ed H. Chi, Jilin Chen, Alex Beutel

Using this simulation framework, we can (a) isolate the effect of the recommender system from the user preferences, and (b) examine not just how the system performs on average for an "average user" but also the extreme experiences it produces under atypical user behavior.

Collaborative Filtering · Recommendation Systems
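
As a hedged sketch of the simulation idea (the preference model and all names are assumptions, not the paper's framework): a toy dot-product recommender is evaluated on a population of simulated users whose true preferences the system only observes noisily, so tail outcomes can be reported alongside the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 200, 8
item_embeddings = rng.normal(size=(n_items, dim))

def recommend(estimated_pref, k=10):
    """Stand-in recommender: rank items by dot product with the
    system's estimate of the user's preferences, return top-k."""
    scores = item_embeddings @ estimated_pref
    return np.argsort(scores)[-k:]

def simulate_user(true_pref, noise=0.5):
    """The system sees only a noisy estimate of the true preference,
    which separates the recommender's effect from the user's tastes."""
    estimated_pref = true_pref + noise * rng.normal(size=dim)
    recs = recommend(estimated_pref)
    return float((item_embeddings[recs] @ true_pref).mean())

# Report the tails of the user experience, not just the average.
utilities = [simulate_user(rng.normal(size=dim)) for _ in range(1000)]
print("mean user utility:", np.mean(utilities))
print("worst 5% of users:", np.percentile(utilities, 5))
```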

Fairness without Demographics through Adversarially Reweighted Learning

3 code implementations · NeurIPS 2020 · Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, Ed H. Chi

Much of the previous machine learning (ML) fairness literature assumes that protected features such as race and sex are present in the dataset, and relies upon them to mitigate fairness concerns.

Fairness
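
The title's technique, adversarially reweighted learning, can be sketched as a min-max game: a learner minimizes a per-example weighted loss while an adversary, which sees only non-protected features and the label, chooses the weights to maximize it. The sketch below uses synthetic data and a simplified weight normalization; the paper's three linked implementations are the authoritative versions.

```python
import torch
import torch.nn as nn

# Learner minimizes a per-example weighted loss; the adversary, which
# sees only non-protected features and the label, picks the weights.
learner = nn.Linear(10, 1)
adversary = nn.Linear(11, 1)  # input: features concatenated with label
opt_l = torch.optim.Adam(learner.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss(reduction="none")

x = torch.randn(256, 10)                 # synthetic features
y = (x[:, 0] > 0).float().unsqueeze(1)   # synthetic labels

for step in range(200):
    per_example = bce(learner(x), y).squeeze(1)
    raw = adversary(torch.cat([x, y], dim=1)).squeeze(1)
    # Simplified normalization: positive weights that cannot vanish.
    weights = 1.0 + x.shape[0] * torch.softmax(raw, dim=0)
    objective = (weights * per_example).mean()

    opt_l.zero_grad()
    opt_a.zero_grad()
    objective.backward()
    opt_l.step()                         # learner descends
    for p in adversary.parameters():     # adversary ascends: flip grads
        if p.grad is not None:
            p.grad.neg_()
    opt_a.step()
```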

Practical Compositional Fairness: Understanding Fairness in Multi-Component Recommender Systems

no code implementations · 5 Nov 2019 · Xuezhi Wang, Nithum Thain, Anu Sinha, Flavien Prost, Ed H. Chi, Jilin Chen, Alex Beutel

In addition to the theoretical results, we find on multiple datasets -- including a large-scale real-world recommender system -- that the overall system's end-to-end fairness is largely achievable by improving fairness in individual components.

Fairness · Recommendation Systems
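
To illustrate the compositional claim in the simplest possible setting, the sketch below assumes an additive two-component scoring pipeline and a mean-score-gap fairness proxy (both simplifications of the paper's setup): equalizing each component's subgroup means also closes the end-to-end gap.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)      # subgroup label
s1 = rng.normal(size=n) + 0.3 * group   # component 1 scores
s2 = rng.normal(size=n) + 0.2 * group   # component 2 scores

def group_gap(scores, group):
    """Fairness proxy: gap in mean score between the two subgroups."""
    return abs(scores[group == 1].mean() - scores[group == 0].mean())

def debias(scores, group):
    """Naive per-component mitigation: equalize subgroup means."""
    out = scores.copy()
    for g in (0, 1):
        out[group == g] -= scores[group == g].mean()
    return out

composed = s1 + s2                      # end-to-end score
print("end-to-end gap, raw:          ", group_gap(composed, group))
fixed = debias(s1, group) + debias(s2, group)
print("end-to-end gap, per-component:", group_gap(fixed, group))
```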

Toward a better trade-off between performance and fairness with kernel-based distribution matching

no code implementations · 25 Oct 2019 · Flavien Prost, Hai Qian, Qiuwen Chen, Ed H. Chi, Jilin Chen, Alex Beutel

As recent literature has demonstrated, classifiers often carry unintended biases toward some subgroups, so deploying machine-learned models to users demands careful consideration of the social consequences.

Fairness
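
Kernel-based distribution matching typically means penalizing the maximum mean discrepancy (MMD) between subgroup prediction distributions. Below is a minimal sketch of a (biased) RBF-kernel MMD estimator; using it as a training penalty, as in the comment, is an assumption about how such a term is applied rather than a reproduction of the paper's exact objective.

```python
import numpy as np

def rbf_kernel(a, b, bandwidth=1.0):
    """RBF kernel matrix between two 1-D arrays of model outputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(pred_a, pred_b, bandwidth=1.0):
    """(Biased) estimate of the squared maximum mean discrepancy;
    small when the two prediction distributions match."""
    return (rbf_kernel(pred_a, pred_a, bandwidth).mean()
            + rbf_kernel(pred_b, pred_b, bandwidth).mean()
            - 2.0 * rbf_kernel(pred_a, pred_b, bandwidth).mean())

# In training, such a term would be added as a penalty, e.g.:
#   total_loss = task_loss + lam * mmd2(preds[g == 0], preds[g == 1])
rng = np.random.default_rng(0)
pa = rng.normal(0.0, 1.0, size=200)   # hypothetical subgroup-A outputs
pb = rng.normal(0.4, 1.0, size=200)   # hypothetical subgroup-B outputs
print("MMD^2 between subgroups:", mmd2(pa, pb))
```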

Debiasing Embeddings for Reduced Gender Bias in Text Classification

no code implementations · WS 2019 · Flavien Prost, Nithum Thain, Tolga Bolukbasi

Bolukbasi et al. (2016) demonstrated that pretrained word embeddings can inherit gender bias from the data they were trained on.

Classification · General Classification · +3
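
The projection step from Bolukbasi et al. (2016), which this paper builds on, removes each vector's component along a gender direction. The sketch below is a minimal version of that step, using a random matrix and a hypothetical direction in place of real pretrained embeddings.

```python
import numpy as np

def debias_embeddings(E, direction):
    """Hard-debiasing projection (Bolukbasi et al., 2016): remove each
    vector's component along the given (e.g. gender) direction."""
    g = direction / np.linalg.norm(direction)
    return E - np.outer(E @ g, g)

rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 50))   # stand-in for a pretrained embedding matrix
g = E[1] - E[2]                   # hypothetical direction, e.g. vec("he") - vec("she")
E_db = debias_embeddings(E, g)
# Every debiased vector is now orthogonal to the direction.
print(np.abs(E_db @ (g / np.linalg.norm(g))).max())  # ~0
```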
