Search Results for author: Jilin Chen

Found 27 papers, 2 papers with code

Controlled Decoding from Language Models

no code implementations25 Oct 2023 Sidharth Mudgal, Jong Lee, Harish Ganapathy, Yaguang Li, Tao Wang, Yanping Huang, Zhifeng Chen, Heng-Tze Cheng, Michael Collins, Trevor Strohman, Jilin Chen, Alex Beutel, Ahmad Beirami

We propose controlled decoding (CD), a novel off-policy reinforcement learning method to control the autoregressive generation from language models towards high reward outcomes.

Multi-Objective Reinforcement Learning · reinforcement-learning
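
The abstract's central idea lends itself to a small decode-time illustration: re-score candidate next tokens with a learned prefix value function so sampling tilts toward high-reward outcomes. Below is a minimal sketch of that token-wise re-scoring, not the paper's method; in CD the value function is trained off-policy, which this sketch does not show, and `lm_logits` and `value_fn` are hypothetical stand-ins.

```python
# Token-wise controlled-decoding sketch: tilt next-token sampling toward
# continuations a learned prefix value function scores highly.
import torch
import torch.nn.functional as F

def cd_sample_step(lm_logits, prefix_ids, value_fn, alpha=1.0, top_k=20):
    """Sample one token from p(y|x) re-weighted by exp(alpha * V([x; y]))."""
    topk = torch.topk(lm_logits, top_k)                 # restrict to top-k for tractability
    values = torch.stack([
        value_fn(torch.cat([prefix_ids, tok.view(1)]))  # V([prefix; y]) per candidate y
        for tok in topk.indices
    ])
    adjusted = topk.values + alpha * values             # logits shifted by value estimates
    probs = F.softmax(adjusted, dim=-1)
    return topk.indices[torch.multinomial(probs, 1)]
```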

Break it, Imitate it, Fix it: Robustness by Generating Human-Like Attacks

no code implementations25 Oct 2023 Aradhana Sinha, Ananth Balashankar, Ahmad Beirami, Thi Avrahami, Jilin Chen, Alex Beutel

We demonstrate the advantages of this system on the ANLI and hate speech detection benchmark datasets - both collected via an iterative, adversarial human-and-model-in-the-loop procedure.

Hate Speech Detection

Improving Few-shot Generalization of Safety Classifiers via Data Augmented Parameter-Efficient Fine-Tuning

no code implementations25 Oct 2023 Ananth Balashankar, Xiao Ma, Aradhana Sinha, Ahmad Beirami, Yao Qin, Jilin Chen, Alex Beutel

As large language models (LLMs) are widely adopted, new safety issues and policies emerge, to which existing safety classifiers do not generalize well.

Data Augmentation · Few-Shot Learning +1

Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting

no code implementations25 Oct 2023 Preethi Lahoti, Nicholas Blumm, Xiao Ma, Raghavendra Kotikalapudi, Sahitya Potluri, Qijun Tan, Hansa Srinivasan, Ben Packer, Ahmad Beirami, Alex Beutel, Jilin Chen

A crucial challenge for generative large language models (LLMs) is diversity: when a user's prompt is under-specified, models may follow implicit assumptions while generating a response. This can homogenize the responses and leave certain demographic groups under-represented in, or even erased from, the generated responses.

Towards A Scalable Solution for Improving Multi-Group Fairness in Compositional Classification

no code implementations11 Jul 2023 James Atwood, Tina Tian, Ben Packer, Meghana Deodhar, Jilin Chen, Alex Beutel, Flavien Prost, Ahmad Beirami

Despite the rich literature on machine learning fairness, relatively little attention has been paid to remediating complex systems, where the final prediction is the combination of multiple classifiers and where multiple groups are present.

Fairness

Let's Do a Thought Experiment: Using Counterfactuals to Improve Moral Reasoning

no code implementations25 Jun 2023 Xiao Ma, Swaroop Mishra, Ahmad Beirami, Alex Beutel, Jilin Chen

Language models still struggle with moral reasoning, despite their impressive performance on many other tasks.

counterfactual · Math +2

Improving Classifier Robustness through Active Generation of Pairwise Counterfactuals

no code implementations22 May 2023 Ananth Balashankar, Xuezhi Wang, Yao Qin, Ben Packer, Nithum Thain, Jilin Chen, Ed H. Chi, Alex Beutel

We demonstrate that with a small amount of human-annotated counterfactual data (10%), we can generate a counterfactual augmentation dataset with learned labels that provides an 18-20% improvement in robustness and a 14-21% reduction in errors on 6 out-of-domain datasets. This is comparable to a fully human-annotated counterfactual dataset on both sentiment classification and question paraphrase tasks.

counterfactual · Data Augmentation +2
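
A sketch of the recipe the abstract describes: generate counterfactuals for training examples and give them learned labels rather than human annotations. `generate_counterfactual` and `label_model` are hypothetical stand-ins, and the paper's active-generation component is omitted here.

```python
def build_augmented_set(train_set, generate_counterfactual, label_model):
    """Augment (text, label) pairs with machine-labeled counterfactuals."""
    augmented = list(train_set)
    for text, _label in train_set:
        cf_text = generate_counterfactual(text)  # e.g., a minimal meaning-flipping edit
        cf_label = label_model(cf_text)          # learned label replaces human annotation
        augmented.append((cf_text, cf_label))
    return augmented
```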

Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers

no code implementations28 Oct 2022 Jieyu Zhao, Xuezhi Wang, Yao Qin, Jilin Chen, Kai-Wei Chang

Large pre-trained language models have shown remarkable performance over the past few years.

A Human-ML Collaboration Framework for Improving Video Content Reviews

no code implementations18 Oct 2022 Meghana Deodhar, Xiao Ma, Yixin Cai, Alex Koes, Alex Beutel, Jilin Chen

We deal with the problem of localized in-video taxonomic human annotation in the video content moderation domain, where the goal is to identify video segments that violate granular policies, e.g., community guidelines on an online video platform.

Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations

no code implementations14 Oct 2022 Flavien Prost, Ben Packer, Jilin Chen, Li Wei, Pierre Kremp, Nicholas Blumm, Susan Wang, Tulsee Doshi, Tonia Osadebe, Lukasz Heldt, Ed H. Chi, Alex Beutel

We reconcile these notions, show that the tension stems from differences in the distributions of users for whom items are relevant, and break down the key factors driving a user's recommendations.

Fairness · Recommendation Systems
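
Invented toy numbers illustrate the per-user vs. aggregated tension the abstract reconciles: which system looks better can flip depending on whether accuracy is averaged per user or pooled over all impressions.

```python
def per_user_acc(users):
    # users: list of (hits, impressions) tuples, one per user
    return sum(h / n for h, n in users) / len(users)

def aggregate_acc(users):
    return sum(h for h, _ in users) / sum(n for _, n in users)

system_a = [(9, 10), (10, 100)]   # strong on the light user, weak on the heavy one
system_b = [(4, 10), (40, 100)]   # uniform quality across users

print(per_user_acc(system_a), aggregate_acc(system_a))  # 0.50 vs ~0.17
print(per_user_acc(system_b), aggregate_acc(system_b))  # 0.40 vs 0.40
```

System A wins the per-user average while system B wins the pooled metric, because the two evaluations weight heavy users differently.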

Flexible text generation for counterfactual fairness probing

no code implementations NAACL (WOAH) 2022 Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster

A common approach for testing fairness issues in text-based classifiers is through the use of counterfactuals: does the classifier output change if a sensitive attribute in the input is changed?

counterfactual · Fairness +1
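
The probe in that question reduces to comparing classifier outputs on an input and its counterfactual. The toy version below uses naive word substitution, which is exactly the limitation the paper's flexible text generation is meant to overcome; `classifier` and the term pair are hypothetical.

```python
def counterfactual_gap(classifier, text, term, swap):
    """Return |f(x) - f(x')| where x' flips a sensitive attribute term in x."""
    return abs(classifier(text) - classifier(text.replace(term, swap)))

# Example probe of a hypothetical toxicity scorer with an identity-term swap:
# gap = counterfactual_gap(toxicity_score, "she is a great engineer", "she", "he")
```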

Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning

no code implementations4 Jun 2021 Yuyan Wang, Xuezhi Wang, Alex Beutel, Flavien Prost, Jilin Chen, Ed H. Chi

This presents a multi-dimensional Pareto frontier on (1) the trade-off between group fairness and accuracy with respect to each task, as well as (2) the trade-offs across multiple tasks.

Fairness · Multi-Task Learning

Measuring Recommender System Effects with Simulated Users

no code implementations12 Jan 2021 Sirui Yao, Yoni Halpern, Nithum Thain, Xuezhi Wang, Kang Lee, Flavien Prost, Ed H. Chi, Jilin Chen, Alex Beutel

Using this simulation framework, we can (a) isolate the effect of the recommender system from the user preferences, and (b) examine how the system performs not just on average for an "average user" but also in the extreme experiences that arise under atypical user behavior.

Collaborative Filtering · Recommendation Systems
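
A minimal sketch of the simulation loop the abstract implies: hold a ground-truth user preference model fixed, run a recommender against it, and log outcomes, so system effects can be separated from preference effects. Every component here is a hypothetical stand-in.

```python
import random

def simulate(recommender, user_prefs, items, steps=100, seed=0):
    """Run one simulated user; user_prefs(item) is the ground-truth accept probability."""
    rng = random.Random(seed)
    history = []
    for _ in range(steps):
        item = recommender(history, items)           # system under test
        accepted = rng.random() < user_prefs(item)   # fixed preference model
        history.append((item, accepted))
    return history

# Comparing recommenders on the same simulated users, including atypical
# ones, isolates system effects from preference effects.
```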

CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation

no code implementations EMNLP 2020 Tianlu Wang, Xuezhi Wang, Yao Qin, Ben Packer, Kang Li, Jilin Chen, Alex Beutel, Ed Chi

Experiments on real-world NLP datasets demonstrate that our method can generate more diverse and fluent adversarial texts, compared to many existing adversarial text generation approaches.

Adversarial Text · Sentiment Analysis +2

Fairness without Demographics through Adversarially Reweighted Learning

5 code implementations NeurIPS 2020 Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, Ed H. Chi

Much of the previous machine learning (ML) fairness literature assumes that protected features such as race and sex are present in the dataset, and relies upon them to mitigate fairness concerns.

Fairness
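
Adversarially Reweighted Learning (ARL) pits a learner against an adversary that, with no access to protected-group labels, learns per-example weights from features and labels that emphasize regions where the learner errs. The sketch below simplifies the paper's weight normalization and uses toy linear models; it is illustrative, not the reference implementation.

```python
# ARL sketch: the adversary assigns positive, normalized example weights;
# the learner minimizes the reweighted loss while the adversary maximizes it.
import torch
import torch.nn as nn

d = 16
learner = nn.Linear(d, 1)        # main classifier (toy stand-in)
adversary = nn.Linear(d + 1, 1)  # weights computed from features and label
opt_l = torch.optim.Adam(learner.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction="none")

def arl_step(x, y):  # x: (B, d) float, y: (B,) float in {0, 1}
    per_example = bce(learner(x).squeeze(-1), y)
    raw = adversary(torch.cat([x, y.unsqueeze(-1)], dim=-1)).squeeze(-1)
    w = 1.0 + len(y) * torch.softmax(raw, dim=0)     # positive, normalized weights
    loss = (w.detach() * per_example).mean()         # learner minimizes
    opt_l.zero_grad(); loss.backward(); opt_l.step()
    adv_loss = -(w * per_example.detach()).mean()    # adversary maximizes
    opt_a.zero_grad(); adv_loss.backward(); opt_a.step()
```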

Practical Compositional Fairness: Understanding Fairness in Multi-Component Recommender Systems

no code implementations5 Nov 2019 Xuezhi Wang, Nithum Thain, Anu Sinha, Flavien Prost, Ed H. Chi, Jilin Chen, Alex Beutel

In addition to the theoretical results, we find on multiple datasets -- including a large-scale real-world recommender system -- that the overall system's end-to-end fairness is largely achievable by improving fairness in individual components.

Fairness · Recommendation Systems

Toward a better trade-off between performance and fairness with kernel-based distribution matching

no code implementations25 Oct 2019 Flavien Prost, Hai Qian, Qiuwen Chen, Ed H. Chi, Jilin Chen, Alex Beutel

As recent literature has demonstrated how classifiers often carry unintended biases toward some subgroups, deploying machine learned models to users demands careful consideration of the social consequences.

Fairness

Recommending what video to watch next: a multitask ranking system

no code implementations RecSys 2019 Zhe Zhao, Lichan Hong, Li Wei, Jilin Chen, Aniruddh Nath, Shawn Andrews, Aditee Kumthekar, Maheswaran Sathiamoorthy, Xinyang Yi, Ed Chi

In this paper, we introduce a large scale multi-objective ranking system for recommending what video to watch next on an industrial video sharing platform.

Transfer of Machine Learning Fairness across Domains

no code implementations24 Jun 2019 Candice Schumann, Xuezhi Wang, Alex Beutel, Jilin Chen, Hai Qian, Ed H. Chi

A model trained for one setting may be picked up and used in many others, particularly as is common with pre-training and cloud APIs.

BIG-bench Machine Learning · Domain Adaptation +1

Fairness in Recommendation Ranking through Pairwise Comparisons

no code implementations2 Mar 2019 Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Li Wei, Yi Wu, Lukasz Heldt, Zhe Zhao, Lichan Hong, Ed H. Chi, Cristos Goodrow

Recommender systems are one of the most pervasive applications of machine learning in industry, with many services using them to match users to products or information.

Fairness · Recommendation Systems
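
In this framing, ranking fairness is audited through pairwise comparisons: how often is an item the user engaged with ranked above one they did not, and does that rate differ across item groups? A toy metric under an assumed data layout:

```python
def pairwise_accuracy(pairs):
    """pairs: (rank_of_engaged_item, rank_of_other_item); lower rank = shown higher."""
    return sum(1 for eng, other in pairs if eng < other) / len(pairs)

def pairwise_fairness_gap(pairs_by_group):
    """Gap in pairwise accuracy across item groups; 0 means parity."""
    accs = [pairwise_accuracy(p) for p in pairs_by_group.values()]
    return max(accs) - min(accs)

# Example with invented data, one pair list per item group:
# gap = pairwise_fairness_gap({"a": [(1, 3), (2, 5)], "b": [(4, 2), (6, 1)]})
```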

Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements

no code implementations14 Jan 2019 Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Allison Woodruff, Christine Luu, Pierre Kreitmann, Jonathan Bischof, Ed H. Chi

In this paper we provide a case-study on the application of fairness in machine learning research to a production classification system, and offer new insights into how to measure and address algorithmic fairness issues.

BIG-bench Machine Learning · Fairness

Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts

9 code implementations19 Jul 2018 Jiaqi Ma, Zhe Zhao, Xinyang Yi, Jilin Chen, Lichan Hong, Ed Chi

In this work, we propose a novel multi-task learning approach, Multi-gate Mixture-of-Experts (MMoE), which explicitly learns to model task relationships from data.

Binary Classification · Click-Through Rate Prediction +2
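
MMoE shares a pool of experts across tasks and gives each task its own softmax gate over those experts, which is how it "explicitly learns to model task relationships from data". A compact PyTorch sketch with illustrative dimensions (the paper's configurations differ):

```python
import torch
import torch.nn as nn

class MMoE(nn.Module):
    def __init__(self, d_in=32, d_expert=16, n_experts=4, n_tasks=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_in, d_expert), nn.ReLU()) for _ in range(n_experts)])
        self.gates = nn.ModuleList(
            [nn.Linear(d_in, n_experts) for _ in range(n_tasks)])  # one gate per task
        self.towers = nn.ModuleList(
            [nn.Linear(d_expert, 1) for _ in range(n_tasks)])      # task-specific heads

    def forward(self, x):
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, d_expert)
        outputs = []
        for gate, tower in zip(self.gates, self.towers):
            w = torch.softmax(gate(x), dim=-1).unsqueeze(-1)           # (B, E, 1)
            mixed = (w * expert_out).sum(dim=1)                        # task-specific mixture
            outputs.append(tower(mixed))
        return outputs
```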

Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations

no code implementations1 Jul 2017 Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi

How can we learn a classifier that is "fair" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group?

Fairness · Recommendation Systems
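
One standard construction for this adversarial setup is a gradient-reversal layer: an encoder feeds both a task head and an adversary that tries to recover the protected attribute, and reversing the adversary's gradients pushes the encoder to discard that attribute. The sketch below uses that construction with illustrative shapes; the paper's formulation (notably its handling of scarce protected-group labels) may differ.

```python
# Adversarial fair-representation sketch via gradient reversal.
# All modules and shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None   # flip gradients flowing into the encoder

encoder = nn.Linear(16, 8)
task_head = nn.Linear(8, 1)            # predicts the task label
adv_head = nn.Linear(8, 1)             # tries to recover the protected attribute

def loss_fn(x, y, a, lam=1.0):         # y: task labels, a: protected attribute
    z = encoder(x)
    task = F.binary_cross_entropy_with_logits(task_head(z).squeeze(-1), y)
    adv = F.binary_cross_entropy_with_logits(
        adv_head(GradReverse.apply(z, lam)).squeeze(-1), a)
    return task + adv                  # one loss: trains the task, confuses the adversary
```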

Why Are You More Engaged? Predicting Social Engagement from Word Use

no code implementations26 Feb 2014 Jalal Mahmud, Jilin Chen, Jeffrey Nichols

We present a study to analyze how word use can predict social engagement behaviors such as replies and retweets in Twitter.
