Search Results for author: Alex Beutel

Found 46 papers, 3 papers with code

Fairness without Demographics through Adversarially Reweighted Learning

5 code implementations • NeurIPS 2020 • Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, Ed H. Chi

Much of the previous machine learning (ML) fairness literature assumes that protected features such as race and sex are present in the dataset, and relies upon them to mitigate fairness concerns.

Fairness
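
The recipe named in the title can be read as a two-player game: an adversary that never sees the protected attribute learns to up-weight the examples the learner handles worst, and the learner minimizes the reweighted loss. Below is a minimal PyTorch sketch of that reading; the network sizes, the weight normalization, and the alternating schedule are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal ARL-style min-max sketch; sizes and normalization are
# illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class Learner(nn.Module):
    """Main task model: features -> logit."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

class Adversary(nn.Module):
    """(features, label) -> positive example weight; never sees the
    protected attribute."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Linear(d + 1, 1)
    def forward(self, x, y):
        f = torch.sigmoid(self.net(torch.cat([x, y.unsqueeze(-1)], -1))).squeeze(-1)
        return 1.0 + f * f.numel() / f.sum()   # keep mean weight bounded

bce = nn.BCEWithLogitsLoss(reduction="none")

def arl_step(x, y, learner, adversary, opt_l, opt_a):
    """One alternating update; x: (n, d) floats, y: (n,) 0/1 floats."""
    loss_l = (adversary(x, y).detach() * bce(learner(x), y)).mean()
    opt_l.zero_grad(); loss_l.backward(); opt_l.step()   # learner: minimize
    loss_a = -(adversary(x, y) * bce(learner(x), y).detach()).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()   # adversary: maximize
```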

The Case for Learned Index Structures

8 code implementations • 4 Dec 2017 • Tim Kraska, Alex Beutel, Ed H. Chi, Jeffrey Dean, Neoklis Polyzotis

Indexes are models: a B-Tree-Index can be seen as a model to map a key to the position of a record within a sorted array, a Hash-Index as a model to map a key to a position of a record within an unsorted array, and a BitMap-Index as a model to indicate if a data record exists or not.

Management • Position
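
The "indexes are models" framing has a compact toy version: fit a model from key to position in a sorted array, then correct its prediction with a search bounded by the model's maximum error on the data. The single linear fit below is an illustrative assumption; the paper itself studies recursive hierarchies of models and other index types.

```python
# Minimal learned-index sketch: a linear model predicts a key's position
# in a sorted array; a binary search over the model's max-error window
# makes the lookup exact.
import bisect
import numpy as np

class LearnedIndex:
    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys, dtype=float))
        pos = np.arange(len(self.keys))
        self.slope, self.intercept = np.polyfit(self.keys, pos, deg=1)
        pred = np.clip(self.keys * self.slope + self.intercept, 0, len(pos) - 1)
        self.max_err = int(np.ceil(np.abs(pred - pos).max()))

    def lookup(self, key):
        guess = int(np.clip(key * self.slope + self.intercept, 0, len(self.keys) - 1))
        lo = max(0, guess - self.max_err)                  # error-bounded window
        hi = min(len(self.keys), guess + self.max_err + 1)
        i = lo + bisect.bisect_left(self.keys[lo:hi].tolist(), key)
        return i if i < len(self.keys) and self.keys[i] == key else None

idx = LearnedIndex(np.random.default_rng(0).integers(0, 10**6, size=10_000))
```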

Top-K Off-Policy Correction for a REINFORCE Recommender System

1 code implementation • 6 Dec 2018 • Minmin Chen, Alex Beutel, Paul Covington, Sagar Jain, Francois Belletti, Ed Chi

The contributions of the paper are: (1) scaling REINFORCE to a production recommender system with an action space on the order of millions; (2) applying off-policy correction to address data biases in learning from logged feedback collected from multiple behavior policies; (3) proposing a novel top-K off-policy correction to account for our policy recommending multiple items at a time; (4) showcasing the value of exploration.

Recommendation Systems
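
Contributions (2) and (3) have a direct surrogate-loss rendering, sketched below under the assumption of logged (action, reward) data with behavior-policy probabilities: a standard importance weight pi/beta for off-policy correction, and the paper's top-K multiplier K(1 - pi)^(K-1) to account for recommending K items at once. Production variance-control details such as weight capping are omitted.

```python
# Sketch of REINFORCE with off-policy and top-K corrections. The
# surrogate below yields the gradient iw * lam * reward * grad log pi.
import torch

def topk_off_policy_loss(log_pi, log_beta, reward, K):
    """log_pi:   log pi_theta(a|s) of logged actions (requires grad).
    log_beta: log behavior-policy probability of the same actions.
    reward:   observed returns. K: slate size."""
    pi = log_pi.exp()
    iw = (log_pi - log_beta).exp().detach()        # off-policy correction
    lam = (K * (1.0 - pi).pow(K - 1)).detach()     # top-K correction
    return -(iw * lam * reward * log_pi).mean()
```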

The Many Faces of Link Fraud

no code implementations • 5 Apr 2017 • Neil Shah, Hemank Lamba, Alex Beutel, Christos Faloutsos

Most past work on social network link fraud detection tries to separate genuine users from fraudsters, implicitly assuming that there is only one type of fraudulent behavior.

Fraud Detection

Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations

no code implementations • 1 Jul 2017 • Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi

How can we learn a classifier that is "fair" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group?

Attribute • Fairness +1

BIRDNEST: Bayesian Inference for Ratings-Fraud Detection

no code implementations • 19 Nov 2015 • Bryan Hooi, Neil Shah, Alex Beutel, Stephan Günnemann, Leman Akoglu, Mohit Kumar, Disha Makhija, Christos Faloutsos

To combine these two approaches, we formulate our Bayesian Inference for Rating Data (BIRD) model, a flexible Bayesian model of user rating behavior.

Bayesian Inference • Fraud Detection

Explaining reviews and ratings with PACO: Poisson Additive Co-Clustering

no code implementations • 6 Dec 2015 • Chao-yuan Wu, Alex Beutel, Amr Ahmed, Alexander J. Smola

With this novel technique we propose a new Bayesian model for joint collaborative filtering of ratings and text reviews through a sum of simple co-clusterings.

Clustering • Collaborative Filtering

ACCAMS: Additive Co-Clustering to Approximate Matrices Succinctly

no code implementations • 31 Dec 2014 • Alex Beutel, Amr Ahmed, Alexander J. Smola

Matrix completion and approximation are popular tools to capture a user's preferences for recommendation and to approximate missing data.

Clustering • Decision Making +1
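
The additive co-clustering idea admits a short backfitting sketch: approximate the matrix as a sum of block-constant "stencils", each fit to the residual left by the previous ones. Clustering rows and columns independently with plain k-means, as below, is an illustrative simplification of the paper's Bayesian model, and a real matrix-completion setting would additionally mask missing entries.

```python
# Backfitting sketch of additive co-clustering: each stencil co-clusters
# the current residual and replaces every block by its mean.
import numpy as np
from sklearn.cluster import KMeans

def additive_coclustering(X, n_stencils=3, k=4, seed=0):
    residual = X.astype(float).copy()
    approx = np.zeros_like(residual)
    for s in range(n_stencils):
        rows = KMeans(k, n_init=10, random_state=seed + s).fit_predict(residual)
        cols = KMeans(k, n_init=10, random_state=seed + s).fit_predict(residual.T)
        block = np.zeros((k, k))
        for i in range(k):
            for j in range(k):
                cell = residual[np.ix_(rows == i, cols == j)]
                if cell.size:
                    block[i, j] = cell.mean()   # block value = residual mean
        stencil = block[rows][:, cols]          # expand to full matrix
        approx += stencil
        residual -= stencil
    return approx
```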

Counterfactual Fairness in Text Classification through Robustness

no code implementations • 27 Sep 2018 • Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H. Chi, Alex Beutel

In this paper, we study counterfactual fairness in text classification, which asks the question: How would the prediction change if the sensitive attribute referenced in the example were different?

Attribute • counterfactual +4
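
The question in the abstract suggests a simple probe, sketched below: substitute the sensitive term referenced in the input and measure how far the classifier's score moves. The identity-term list and the generic classify(text) -> score callable are placeholders, not the paper's setup.

```python
# Counterfactual probe: max score change across identity substitutions.
import itertools

IDENTITY_TERMS = ["christian", "muslim", "jewish"]  # illustrative placeholder

def counterfactual_gap(text, classify):
    tokens = text.split()
    variants = [text.replace(a, b)
                for a, b in itertools.permutations(IDENTITY_TERMS, 2)
                if a in tokens]
    if not variants:
        return 0.0
    scores = [classify(v) for v in variants] + [classify(text)]
    return max(scores) - min(scores)
```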

Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements

no code implementations • 14 Jan 2019 • Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Allison Woodruff, Christine Luu, Pierre Kreitmann, Jonathan Bischof, Ed H. Chi

In this paper we provide a case-study on the application of fairness in machine learning research to a production classification system, and offer new insights into how to measure and address algorithmic fairness issues.

BIG-bench Machine Learning • Fairness

Fairness in Recommendation Ranking through Pairwise Comparisons

no code implementations • 2 Mar 2019 • Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Li Wei, Yi Wu, Lukasz Heldt, Zhe Zhao, Lichan Hong, Ed H. Chi, Cristos Goodrow

Recommender systems are one of the most pervasive applications of machine learning in industry, with many services using them to match users to products or information.

Fairness • Recommendation Systems

Transfer of Machine Learning Fairness across Domains

no code implementations • 24 Jun 2019 • Candice Schumann, Xuezhi Wang, Alex Beutel, Jilin Chen, Hai Qian, Ed H. Chi

A model trained for one setting may be picked up and used in many others, particularly as is common with pre-training and cloud APIs.

Attribute • BIG-bench Machine Learning +2

Toward a better trade-off between performance and fairness with kernel-based distribution matching

no code implementations • 25 Oct 2019 • Flavien Prost, Hai Qian, Qiuwen Chen, Ed H. Chi, Jilin Chen, Alex Beutel

As recent literature has demonstrated how classifiers often carry unintended biases toward some subgroups, deploying machine learned models to users demands careful consideration of the social consequences.

Fairness
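
A common reading of "kernel-based distribution matching" is a maximum mean discrepancy (MMD) penalty between the model's score distributions on two subgroups, added to the task loss. The sketch below assumes that reading; the RBF kernel bandwidth and the penalty weight are illustrative choices.

```python
# MMD^2 (biased estimate) between two 1-D score distributions.
import torch

def rbf_kernel(a, b, bandwidth=1.0):
    d2 = (a.unsqueeze(1) - b.unsqueeze(0)).pow(2)
    return torch.exp(-d2 / (2 * bandwidth**2))

def mmd_penalty(scores_g0, scores_g1, bandwidth=1.0):
    k00 = rbf_kernel(scores_g0, scores_g0, bandwidth).mean()
    k11 = rbf_kernel(scores_g1, scores_g1, bandwidth).mean()
    k01 = rbf_kernel(scores_g0, scores_g1, bandwidth).mean()
    return k00 + k11 - 2 * k01

# total_loss = task_loss + lam * mmd_penalty(scores[g == 0], scores[g == 1])
```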

Practical Compositional Fairness: Understanding Fairness in Multi-Component Recommender Systems

no code implementations • 5 Nov 2019 • Xuezhi Wang, Nithum Thain, Anu Sinha, Flavien Prost, Ed H. Chi, Jilin Chen, Alex Beutel

In addition to the theoretical results, we find on multiple datasets -- including a large-scale real-world recommender system -- that the overall system's end-to-end fairness is largely achievable by improving fairness in individual components.

Fairness • Recommendation Systems

Improving Calibration through the Relationship with Adversarial Robustness

no code implementations • NeurIPS 2021 • Yao Qin, Xuezhi Wang, Alex Beutel, Ed H. Chi

To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS) that integrates the correlations of adversarial robustness and calibration into training by adaptively softening labels for an example based on how easily it can be attacked by an adversary.

Adversarial Robustness
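
The mechanism the abstract describes, softening labels more for examples that are easier to attack, can be sketched as per-example label smoothing driven by a robustness score. The linear schedule and the [0, 1] robustness proxy below are illustrative assumptions; the paper derives its adaptive schedule from adversarial robustness measured during training.

```python
# Per-example adaptive label smoothing: low robustness -> softer target.
import torch
import torch.nn.functional as F

def adaptive_smooth_targets(one_hot, robustness, eps_max=0.2):
    """one_hot: (n, n_cls); robustness in [0, 1], 1 = hard to attack."""
    eps = eps_max * (1.0 - robustness).unsqueeze(-1)
    return one_hot * (1 - eps) + eps / one_hot.size(-1)

def ar_adals_loss(logits, one_hot, robustness):
    targets = adaptive_smooth_targets(one_hot, robustness)
    return -(targets * F.log_softmax(logits, dim=-1)).sum(-1).mean()
```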

What are effective labels for augmented data? Improving robustness with AutoLabel

no code implementations • 1 Jan 2021 • Yao Qin, Xuezhi Wang, Balaji Lakshminarayanan, Ed Chi, Alex Beutel

Despite this, most existing work simply reuses the original label from the clean data, and the choice of label accompanying the augmented data is relatively less explored.

Adversarial Robustness • Data Augmentation

CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation

no code implementations • EMNLP 2020 • Tianlu Wang, Xuezhi Wang, Yao Qin, Ben Packer, Kang Li, Jilin Chen, Alex Beutel, Ed Chi

Experiments on real-world NLP datasets demonstrate that our method can generate more diverse and fluent adversarial texts, compared to many existing adversarial text generation approaches.

Adversarial Text • Attribute +3

Learned Indexes for a Google-scale Disk-based Database

no code implementations • 23 Dec 2020 • Hussam Abu-Libdeh, Deniz Altınbüken, Alex Beutel, Ed H. Chi, Lyric Doshi, Tim Kraska, Xiaozhou Li, Andy Ly, Christopher Olston

There is great excitement about learned index structures, but understandable skepticism about the practicality of a new method uprooting decades of research on B-Trees.

Measuring Recommender System Effects with Simulated Users

no code implementations • 12 Jan 2021 • Sirui Yao, Yoni Halpern, Nithum Thain, Xuezhi Wang, Kang Lee, Flavien Prost, Ed H. Chi, Jilin Chen, Alex Beutel

Using this simulation framework, we can (a) isolate the effect of the recommender system from the user preferences, and (b) examine how the system performs not just on average for an "average user" but also the extreme experiences under atypical user behavior.

Collaborative Filtering • Recommendation Systems
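
Both uses, (a) isolating the recommender from user preferences and (b) looking past the average user, are easy to see in a toy simulation where ground-truth preferences are known. Everything below (the dot-product preference model and the random baseline recommender) is invented for illustration.

```python
# Toy simulation: fixed ground-truth user tastes, pluggable recommender.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 100, 50, 8
prefs = rng.normal(size=(n_users, d))    # ground-truth user tastes
items = rng.normal(size=(n_items, d))

def run(recommend, steps=20):
    utility = np.zeros(n_users)
    for t in range(steps):
        for u in range(n_users):
            utility[u] += prefs[u] @ items[recommend(u, t)]
    return utility

util = run(lambda u, t: rng.integers(n_items))   # random baseline
print("average user:", util.mean(), "| worst user:", util.min())
```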

Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information

no code implementations • 16 Feb 2021 • Pranjal Awasthi, Alex Beutel, Matthaeus Kleindessner, Jamie Morgenstern, Xuezhi Wang

An alternate approach that is commonly used is to separately train an attribute classifier on data with sensitive attribute information, and then use it later in the ML pipeline to evaluate the bias of a given classifier.

Attribute • BIG-bench Machine Learning +2
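
The "alternate approach" described here has a direct sketch: fit an attribute classifier on auxiliary data that includes the sensitive attribute, then use its predictions to slice the target classifier's errors. The logistic-regression choice and variable names below are illustrative; the paper's subject is when and how such proxy estimates can be trusted.

```python
# Estimate a group error-rate gap using an inferred sensitive attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimated_group_gap(X_aux, a_aux, X_eval, y_eval, y_pred):
    attr_clf = LogisticRegression(max_iter=1000).fit(X_aux, a_aux)
    a_hat = attr_clf.predict(X_eval)               # inferred attribute
    err = (y_pred != y_eval).astype(float)
    return abs(err[a_hat == 1].mean() - err[a_hat == 0].mean())
```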

Towards Content Provider Aware Recommender Systems: A Simulation Study on the Interplay between User and Provider Utilities

no code implementations • 6 May 2021 • Ruohan Zhan, Konstantina Christakopoulou, Ya Le, Jayden Ooi, Martin Mladenov, Alex Beutel, Craig Boutilier, Ed H. Chi, Minmin Chen

We then build a REINFORCE recommender agent, coined EcoAgent, to optimize a joint objective of user utility and the counterfactual utility lift of the provider associated with the recommended content, which we show to be equivalent to maximizing overall user utility and the utilities of all providers on the platform under some mild assumptions.

counterfactual • Recommendation Systems

Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning

no code implementations • 4 Jun 2021 • Yuyan Wang, Xuezhi Wang, Alex Beutel, Flavien Prost, Jilin Chen, Ed H. Chi

This presents a multi-dimensional Pareto frontier on (1) the trade-off between group fairness and accuracy with respect to each task, as well as (2) the trade-offs across multiple tasks.

Fairness • Multi-Task Learning

Recurrent Recommender Networks

no code implementations • WSDM 2017 • Chao-yuan Wu, Amr Ahmed, Alex Beutel, Alexander J. Smola, How Jing

Recommender systems traditionally assume that user profiles and movie attributes are static.

Recommendation Systems

Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation

no code implementations • 15 Oct 2021 • Yao Qin, Chiyuan Zhang, Ting Chen, Balaji Lakshminarayanan, Alex Beutel, Xuezhi Wang

We show that patch-based negative augmentation consistently improves robustness of ViTs across a wide set of ImageNet based robustness benchmarks.

Data Augmentation
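
A patch-based negative augmentation can be sketched as a transform that shuffles an image's patches, keeping local texture but destroying global structure, paired with a loss that pushes the model toward an uninformative prediction on the transformed input. The patch size and the uniform-target loss below are illustrative choices, not the paper's full recipe.

```python
# Shuffle non-overlapping patches; train toward a uniform prediction.
import torch
import torch.nn.functional as F

def shuffle_patches(imgs, patch=16):
    b, c, h, w = imgs.shape
    gh, gw = h // patch, w // patch
    x = imgs.reshape(b, c, gh, patch, gw, patch)
    x = x.permute(0, 2, 4, 1, 3, 5).reshape(b, gh * gw, c, patch, patch)
    x = x[:, torch.randperm(gh * gw)]               # permute patch order
    x = x.reshape(b, gh, gw, c, patch, patch).permute(0, 3, 1, 4, 2, 5)
    return x.reshape(b, c, h, w)

def negative_loss(logits):
    # Cross-entropy to the uniform distribution (up to a constant).
    return -F.log_softmax(logits, dim=-1).mean()
```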

Can We Improve Model Robustness through Secondary Attribute Counterfactuals?

no code implementations • EMNLP 2021 • Ananth Balashankar, Xuezhi Wang, Ben Packer, Nithum Thain, Ed Chi, Alex Beutel

By implementing RDI in the context of toxicity detection, we find that accounting for secondary attributes can significantly improve robustness, with improvements in sliced accuracy on the original dataset up to 7% compared to existing robustness methods.

Attribute • coreference-resolution +3

Flexible text generation for counterfactual fairness probing

no code implementations • NAACL (WOAH) 2022 • Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster

A common approach for testing fairness issues in text-based classifiers is through the use of counterfactuals: does the classifier output change if a sensitive attribute in the input is changed?

Attribute • counterfactual +2

Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations

no code implementations • 14 Oct 2022 • Flavien Prost, Ben Packer, Jilin Chen, Li Wei, Pierre Kremp, Nicholas Blumm, Susan Wang, Tulsee Doshi, Tonia Osadebe, Lukasz Heldt, Ed H. Chi, Alex Beutel

We reconcile these notions and show that the tension is due to differences in distributions of users where items are relevant, and break down the important factors of the user's recommendations.

Fairness • Recommendation Systems
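
The paradox is easy to reproduce with toy numbers (invented here, following the classic Simpson's-paradox pattern): group A has the higher pairwise accuracy for every individual user, yet group B wins once pairs are pooled across users, because the totals are distributed unevenly.

```python
# (user, group) -> (correctly ranked pairs, total pairs); numbers invented.
data = {
    ("u1", "A"): (81, 87),   ("u1", "B"): (234, 270),
    ("u2", "A"): (192, 263), ("u2", "B"): (55, 80),
}
for u in ("u1", "u2"):       # per-user: A wins both times
    a, b = data[(u, "A")], data[(u, "B")]
    print(u, round(a[0] / a[1], 3), ">", round(b[0] / b[1], 3))
for g in ("A", "B"):         # pooled: B wins (0.780 vs 0.826)
    hits = sum(data[(u, g)][0] for u in ("u1", "u2"))
    total = sum(data[(u, g)][1] for u in ("u1", "u2"))
    print(g, round(hits / total, 3))
```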

A Human-ML Collaboration Framework for Improving Video Content Reviews

no code implementations • 18 Oct 2022 • Meghana Deodhar, Xiao Ma, Yixin Cai, Alex Koes, Alex Beutel, Jilin Chen

We deal with the problem of localized in-video taxonomic human annotation in the video content moderation domain, where the goal is to identify video segments that violate granular policies, e.g., community guidelines on an online video platform.

Striving for data-model efficiency: Identifying data externalities on group performance

no code implementations • 11 Nov 2022 • Esther Rolf, Ben Packer, Alex Beutel, Fernando Diaz

Building trustworthy, effective, and responsible machine learning systems hinges on understanding how differences in training data and modeling decisions interact to impact predictive performance.

What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel

no code implementations • 22 Feb 2023 • Yao Qin, Xuezhi Wang, Balaji Lakshminarayanan, Ed H. Chi, Alex Beutel

A wide breadth of research has devised data augmentation approaches that can improve both accuracy and generalization performance for neural networks.

Data Augmentation

Towards Robust Prompts on Vision-Language Models

no code implementations • 17 Apr 2023 • Jindong Gu, Ahmad Beirami, Xuezhi Wang, Alex Beutel, Philip Torr, Yao Qin

With the advent of vision-language models (VLMs) that can perform in-context and prompt-based learning, how can we design prompting approaches that robustly generalize to distribution shift and can be used on novel classes outside the support set of the prompts?

In-Context Learning

Improving Classifier Robustness through Active Generation of Pairwise Counterfactuals

no code implementations • 22 May 2023 • Ananth Balashankar, Xuezhi Wang, Yao Qin, Ben Packer, Nithum Thain, Jilin Chen, Ed H. Chi, Alex Beutel

We demonstrate that with a small amount of human-annotated counterfactual data (10%), we can generate a counterfactual augmentation dataset with learned labels, that provides an 18-20% improvement in robustness and a 14-21% reduction in errors on 6 out-of-domain datasets, comparable to that of a fully human-annotated counterfactual dataset for both sentiment classification and question paraphrase tasks.

counterfactual • Data Augmentation +2

Let's Do a Thought Experiment: Using Counterfactuals to Improve Moral Reasoning

no code implementations • 25 Jun 2023 • Xiao Ma, Swaroop Mishra, Ahmad Beirami, Alex Beutel, Jilin Chen

Language models still struggle on moral reasoning, despite their impressive performance in many other tasks.

counterfactual • Math +2

Towards A Scalable Solution for Improving Multi-Group Fairness in Compositional Classification

no code implementations • 11 Jul 2023 • James Atwood, Tina Tian, Ben Packer, Meghana Deodhar, Jilin Chen, Alex Beutel, Flavien Prost, Ahmad Beirami

Despite the rich literature on machine learning fairness, relatively little attention has been paid to remediating complex systems, where the final prediction is the combination of multiple classifiers and where multiple groups are present.

Fairness

Learning from Negative User Feedback and Measuring Responsiveness for Sequential Recommenders

no code implementations • 23 Aug 2023 • Yueqi Wang, Yoni Halpern, Shuo Chang, Jingchen Feng, Elaine Ya Le, Longfei Li, Xujian Liang, Min-Cheng Huang, Shane Li, Alex Beutel, Yaping Zhang, Shuchao Bi

In this work, we incorporate explicit and implicit negative user feedback into the training objective of sequential recommenders in the retrieval stage using a "not-to-recommend" loss function that optimizes for the log-likelihood of not recommending items with negative feedback.

counterfactual • Recommendation Systems +1
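
The "not-to-recommend" loss described in the abstract has a direct rendering: for items with negative feedback, maximize the log-likelihood of not recommending them, log(1 - p(item)), alongside the usual positive log-likelihood. The softmax retrieval head below is an illustrative parameterization.

```python
# Mixed next-item loss with a not-to-recommend term for negatives.
import torch

def retrieval_loss(logits, item_ids, is_negative, eps=1e-8):
    """logits: (batch, n_items); item_ids: engaged item per example;
    is_negative: bool mask marking negative-feedback examples."""
    p = torch.softmax(logits, -1).gather(1, item_ids.unsqueeze(1)).squeeze(1)
    pos_ll = torch.log(p + eps)           # standard likelihood term
    neg_ll = torch.log(1.0 - p + eps)     # likelihood of NOT recommending
    return -torch.where(is_negative, neg_ll, pos_ll).mean()
```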

Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting

no code implementations • 25 Oct 2023 • Preethi Lahoti, Nicholas Blumm, Xiao Ma, Raghavendra Kotikalapudi, Sahitya Potluri, Qijun Tan, Hansa Srinivasan, Ben Packer, Ahmad Beirami, Alex Beutel, Jilin Chen

A crucial challenge for generative large language models (LLMs) is diversity: when a user's prompt is under-specified, models may follow implicit assumptions while generating a response, which may result in homogenization of the responses, as well as certain demographic groups being under-represented or even erased from the generated responses.

Improving Few-shot Generalization of Safety Classifiers via Data Augmented Parameter-Efficient Fine-Tuning

no code implementations • 25 Oct 2023 • Ananth Balashankar, Xiao Ma, Aradhana Sinha, Ahmad Beirami, Yao Qin, Jilin Chen, Alex Beutel

As large language models (LLMs) are widely adopted, new safety issues and policies emerge, to which existing safety classifiers do not generalize well.

Data Augmentation • Few-Shot Learning +1

Break it, Imitate it, Fix it: Robustness by Generating Human-Like Attacks

no code implementations • 25 Oct 2023 • Aradhana Sinha, Ananth Balashankar, Ahmad Beirami, Thi Avrahami, Jilin Chen, Alex Beutel

We demonstrate the advantages of this system on the ANLI and hate speech detection benchmark datasets - both collected via an iterative, adversarial human-and-model-in-the-loop procedure.

Hate Speech Detection

Multi-Group Fairness Evaluation via Conditional Value-at-Risk Testing

no code implementations • 6 Dec 2023 • Lucas Monteiro Paes, Ananda Theertha Suresh, Alex Beutel, Flavio P. Calmon, Ahmad Beirami

Here, the sample complexity for estimating the worst-case performance gap across groups (e.g., the largest difference in error rates) increases exponentially with the number of group-denoting sensitive attributes.

Fairness
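
The CVaR quantity in the title can be sketched with the standard empirical definition: instead of the single worst per-group gap, average the gaps in the upper tail, which is easier to estimate when the number of groups is exponential in the number of sensitive attributes. The group construction and tail level alpha below are illustrative.

```python
# CVaR_alpha over per-group performance gaps.
import numpy as np

def cvar_of_group_gaps(errors, groups, alpha=0.9):
    """errors: 0/1 per-example mistakes; groups: group id per example."""
    overall = errors.mean()
    gaps = np.array([abs(errors[groups == g].mean() - overall)
                     for g in np.unique(groups)])
    tail = np.quantile(gaps, alpha)
    return gaps[gaps >= tail].mean()   # mean of the worst (1 - alpha) tail
```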
