Search Results for author: Xuezhi Wang

Found 28 papers, 6 papers with code

Can We Improve Model Robustness through Secondary Attribute Counterfactuals?

no code implementations EMNLP 2021 Ananth Balashankar, Xuezhi Wang, Ben Packer, Nithum Thain, Ed Chi, Alex Beutel

By implementing RDI in the context of toxicity detection, we find that accounting for secondary attributes can significantly improve robustness, with gains of up to 7% in sliced accuracy on the original dataset compared to existing robustness methods.

Coreference Resolution Data Augmentation +1

Rationale-Augmented Ensembles in Language Models

no code implementations 2 Jul 2022 Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou

Recent research has shown that rationales, or step-by-step chains of thought, can be used to improve performance in multi-step reasoning tasks.

Natural Language Processing Prompt Engineering +3
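
A minimal sketch of the rationale-ensembling idea above: several prompt variants are built from different sampled rationale exemplars and the final answers are aggregated in the output space. The `query_model` callable and the exemplar format are illustrative assumptions, not the paper's exact setup.

```python
import random
from collections import Counter

def rationale_ensemble(question, rationale_exemplars, query_model, k=5):
    """Aggregate answers over prompts built from different sampled rationales.

    `query_model` is a hypothetical callable that sends a prompt to an LLM and
    returns its final answer string; `rationale_exemplars` is a list of
    (question, rationale, answer) few-shot examples.
    """
    answers = []
    for _ in range(k):
        # Sample a subset of rationale exemplars to build this prompt variant.
        shots = random.sample(rationale_exemplars, k=min(3, len(rationale_exemplars)))
        prompt = ""
        for q, rationale, a in shots:
            prompt += f"Q: {q}\nA: {rationale} The answer is {a}.\n\n"
        prompt += f"Q: {question}\nA:"
        answers.append(query_model(prompt))
    # Aggregate in the output space: majority vote over the final answers.
    return Counter(answers).most_common(1)[0][0]
```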

Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

no code implementations 21 May 2022 Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, Ed Chi

We propose a novel prompting strategy, least-to-most prompting, that enables large language models to better perform multi-step reasoning tasks.
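
A rough two-stage sketch of the least-to-most idea described above: the model is first asked to decompose the problem into simpler subproblems, which are then solved sequentially with earlier answers fed back into the context. The `query_model` callable and the decomposition prompt wording are hypothetical stand-ins.

```python
def least_to_most(problem, query_model):
    """Two-stage least-to-most prompting sketch (illustrative, not the exact recipe)."""
    # Stage 1: ask the model to break the problem into simpler subproblems.
    decomposition = query_model(
        "Decompose the following problem into a numbered list of simpler "
        f"subproblems, ending with the original question:\n{problem}"
    )
    subproblems = [line.strip() for line in decomposition.splitlines() if line.strip()]

    # Stage 2: solve the subproblems in order, feeding earlier answers back in.
    context = problem
    answer = ""
    for sub in subproblems:
        answer = query_model(f"{context}\n\nQ: {sub}\nA:")
        context += f"\n\nQ: {sub}\nA: {answer}"
    return answer  # the answer to the final (original) question
```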

Self-Consistency Improves Chain of Thought Reasoning in Language Models

no code implementations 21 Mar 2022 Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou

We explore a simple ensemble strategy, self-consistency, that significantly improves the reasoning accuracy of large language models.

Ranked #2 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Arithmetic Reasoning +2
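
A minimal Python sketch of the self-consistency strategy above: sample several chains of thought for the same prompt and take a majority vote over the extracted final answers. `sample_completion` is a hypothetical stochastic LLM call, and the answer parsing is deliberately naive.

```python
from collections import Counter

def self_consistent_answer(prompt, sample_completion, n_samples=20):
    """Marginalize over sampled reasoning paths by majority vote.

    `sample_completion` stands in for a temperature-sampled LLM call that
    returns a chain of thought ending in "The answer is <x>.".
    """
    def extract_answer(completion):
        # Naive parse of the final answer from the chain of thought.
        marker = "The answer is"
        return completion.rsplit(marker, 1)[-1].strip(" .") if marker in completion else completion.strip()

    answers = [extract_answer(sample_completion(prompt)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```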

Continual Sequence Generation with Adaptive Compositional Modules

1 code implementation ACL 2022 Yanzhe Zhang, Xuezhi Wang, Diyi Yang

Continual learning is essential for real-world deployment when there is a need to quickly adapt the model to new tasks without forgetting knowledge of old tasks.

Continual Learning Transfer Learning

Chain of Thought Prompting Elicits Reasoning in Large Language Models

no code implementations 28 Jan 2022 Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou

We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning.

Arithmetic Language Modelling
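
For illustration, a chain-of-thought prompt simply includes few-shot exemplars whose answers spell out the intermediate reasoning steps before the final answer. The sketch below assumes a hypothetical `query_model` call and uses one exemplar in the style of the paper.

```python
# A few-shot prompt whose exemplar answer spells out the intermediate
# reasoning steps before giving the final answer.
cot_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls.
5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

def chain_of_thought(question, query_model):
    # `query_model` is a hypothetical LLM call; the sampled completion is
    # expected to contain its own reasoning steps followed by the answer.
    return query_model(cot_prompt.format(question=question))
```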

Measure and Improve Robustness in NLP Models: A Survey

no code implementations NAACL 2022 Xuezhi Wang, Haohan Wang, Diyi Yang

Despite robustness being an increasingly studied topic, it has been explored separately in applications like vision and NLP, with differing definitions, evaluation protocols, and mitigation strategies across multiple lines of research.

Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation

no code implementations 15 Oct 2021 Yao Qin, Chiyuan Zhang, Ting Chen, Balaji Lakshminarayanan, Alex Beutel, Xuezhi Wang

We show that patch-based negative augmentation consistently improves robustness of ViTs across a wide set of ImageNet based robustness benchmarks.

Data Augmentation
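
Patch shuffling is one concrete instance of a patch-based transformation that destroys global structure while preserving local patch statistics; a minimal NumPy sketch is below. The patch size and the idea of penalizing the model for keeping its prediction on such images are illustrative assumptions, not the paper's full recipe.

```python
import numpy as np

def shuffle_patches(image, patch_size=16, rng=None):
    """Patch shuffling as one example of a patch-based negative augmentation.

    `image` is an HxWxC array whose height and width are divisible by
    `patch_size`. The result keeps local patch statistics but destroys global
    structure, so a robust ViT should not keep its original prediction on it.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = image.shape
    gh, gw = h // patch_size, w // patch_size
    # Split into a (gh*gw, patch_size, patch_size, c) stack of patches.
    patches = image.reshape(gh, patch_size, gw, patch_size, c).transpose(0, 2, 1, 3, 4)
    patches = patches.reshape(gh * gw, patch_size, patch_size, c)
    rng.shuffle(patches, axis=0)
    # Reassemble the shuffled patches into an image of the original shape.
    out = patches.reshape(gh, gw, patch_size, patch_size, c).transpose(0, 2, 1, 3, 4)
    return out.reshape(h, w, c)
```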

Identifying and Mitigating Spurious Correlations for Improving Robustness in NLP Models

1 code implementation Findings (NAACL) 2022 Tianlu Wang, Rohit Sridhar, Diyi Yang, Xuezhi Wang

Recently, NLP models have achieved remarkable progress across a variety of tasks; however, they have also been criticized for not being robust.

Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning

no code implementations 4 Jun 2021 Yuyan Wang, Xuezhi Wang, Alex Beutel, Flavien Prost, Jilin Chen, Ed H. Chi

This presents a multi-dimensional Pareto frontier on (1) the trade-off between group fairness and accuracy with respect to each task, as well as (2) the trade-offs across multiple tasks.

Fairness Multi-Task Learning

TWIST-GAN: Towards Wavelet Transform and Transferred GAN for Spatio-Temporal Single Image Super Resolution

no code implementations 20 Apr 2021 Fayaz Ali Dharejo, Farah Deeba, Yuanchun Zhou, Bhagwan Das, Munsif Ali Jatoi, Muhammad Zawish, Yi Du, Xuezhi Wang

We propose a frequency-domain, spatio-temporal remote sensing single image super-resolution technique (TWIST-GAN) that reconstructs the HR image with generative adversarial networks (GANs) operating on various frequency bands.

Image Super-Resolution Single Image Super Resolution
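
As a rough illustration of the frequency-band decomposition such a pipeline relies on, the sketch below uses PyWavelets to split an image into wavelet sub-bands and fuse them back; the per-band GAN generator itself is omitted, and the wavelet choice is an assumption.

```python
import numpy as np
import pywt

# A 2D discrete wavelet transform splits the low-resolution image into an
# approximation (LL) sub-band and detail (LH, HL, HH) sub-bands, which a GAN
# generator could then enhance per frequency band.
lr_image = np.random.rand(64, 64)               # placeholder grayscale LR image
ll, (lh, hl, hh) = pywt.dwt2(lr_image, "haar")  # one-level DWT into 4 sub-bands

# ... a generator would enhance each sub-band here (not shown) ...

# The inverse transform fuses the (enhanced) sub-bands back into an image.
reconstructed = pywt.idwt2((ll, (lh, hl, hh)), "haar")
```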

Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information

no code implementations 16 Feb 2021 Pranjal Awasthi, Alex Beutel, Matthaeus Kleindessner, Jamie Morgenstern, Xuezhi Wang

A commonly used alternative is to separately train an attribute classifier on data that does contain sensitive attribute information, and then use it later in the ML pipeline to evaluate the bias of a given classifier.

BIG-bench Machine Learning Fairness +1
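
The proxy-classifier approach described above can be sketched as follows: fit an attribute classifier on a separate dataset that contains the sensitive attribute, then slice the main model's accuracy by predicted group. Function and variable names are illustrative, and scikit-learn is used only for convenience.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_group_gap(main_model, X_eval, y_eval, X_attr, a_attr):
    """Estimate a fairness gap when sensitive attributes are missing at eval time.

    A proxy attribute classifier is fit on a separate dataset (X_attr, a_attr)
    that does contain the sensitive attribute; its predictions are then used to
    slice the main model's accuracy on (X_eval, y_eval).
    """
    attr_clf = LogisticRegression(max_iter=1000).fit(X_attr, a_attr)
    a_hat = attr_clf.predict(X_eval)              # predicted group membership
    correct = (main_model.predict(X_eval) == y_eval)

    accs = [correct[a_hat == g].mean() for g in np.unique(a_hat)]
    return max(accs) - min(accs)                  # proxy-estimated accuracy gap
```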

Measuring Recommender System Effects with Simulated Users

no code implementations 12 Jan 2021 Sirui Yao, Yoni Halpern, Nithum Thain, Xuezhi Wang, Kang Lee, Flavien Prost, Ed H. Chi, Jilin Chen, Alex Beutel

Using this simulation framework, we can (a) isolate the effect of the recommender system from the user preferences, and (b) examine how the system performs not just on average for an "average user" but also the extreme experiences under atypical user behavior.

Collaborative Filtering Recommendation Systems

What are effective labels for augmented data? Improving robustness with AutoLabel

no code implementations 1 Jan 2021 Yao Qin, Xuezhi Wang, Balaji Lakshminarayanan, Ed Chi, Alex Beutel

Despite this, most existing work simply reuses the original label from the clean data, and the choice of label to accompany the augmented data remains relatively underexplored.

Adversarial Robustness Data Augmentation
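
As one illustration of choosing a label for augmented data other than reusing the clean label verbatim, the sketch below smooths the one-hot label in proportion to how strongly the example was transformed; this schedule is a hypothetical example, not necessarily the AutoLabel formulation.

```python
import numpy as np

def soften_label(y_onehot, distance, max_distance, floor=0.5):
    """Soften the clean label in proportion to the strength of the augmentation.

    `distance` is any measure of the gap between the augmented and clean
    inputs; the linear schedule and `floor` are illustrative assumptions.
    """
    confidence = 1.0 - (1.0 - floor) * min(distance / max_distance, 1.0)
    num_classes = y_onehot.shape[-1]
    # Keep `confidence` mass on the original class, spread the rest uniformly.
    return confidence * y_onehot + (1.0 - confidence) / num_classes * np.ones_like(y_onehot)
```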

CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation

no code implementations EMNLP 2020 Tianlu Wang, Xuezhi Wang, Yao Qin, Ben Packer, Kang Li, Jilin Chen, Alex Beutel, Ed Chi

Experiments on real-world NLP datasets demonstrate that our method can generate more diverse and fluent adversarial texts, compared to many existing adversarial text generation approaches.

Adversarial Text Sentiment Analysis +1

Improving Calibration through the Relationship with Adversarial Robustness

no code implementations NeurIPS 2021 Yao Qin, Xuezhi Wang, Alex Beutel, Ed H. Chi

To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS) that integrates the correlations of adversarial robustness and calibration into training by adaptively softening labels for an example based on how easily it can be attacked by an adversary.

Adversarial Robustness
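
A minimal sketch of adaptive label smoothing driven by adversarial vulnerability: labels of easily attacked examples are softened more. The per-example vulnerability score and the linear schedule are illustrative assumptions rather than the exact AR-AdaLS formulation.

```python
import numpy as np

def adaptive_label_smoothing(y_onehot, vulnerability, max_smooth=0.2):
    """Soften labels more for examples that are easier to attack.

    `y_onehot` has shape (N, K); `vulnerability` is a per-example score in
    [0, 1], e.g. 1 minus the correct-class confidence under a small
    adversarial perturbation (how it is computed is left abstract here).
    """
    eps = max_smooth * np.clip(vulnerability, 0.0, 1.0)   # per-example smoothing
    num_classes = y_onehot.shape[-1]
    return (1.0 - eps)[:, None] * y_onehot + (eps / num_classes)[:, None]
```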

Fairness without Demographics through Adversarially Reweighted Learning

3 code implementations NeurIPS 2020 Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, Ed H. Chi

Much of the previous machine learning (ML) fairness literature assumes that protected features such as race and sex are present in the dataset, and relies upon them to mitigate fairness concerns.

Fairness

ToTTo: A Controlled Table-To-Text Generation Dataset

1 code implementation EMNLP 2020 Ankur P. Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, Dipanjan Das

We present ToTTo, an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.

Conditional Text Generation Data-to-Text Generation +1

Practical Compositional Fairness: Understanding Fairness in Multi-Component Recommender Systems

no code implementations 5 Nov 2019 Xuezhi Wang, Nithum Thain, Anu Sinha, Flavien Prost, Ed H. Chi, Jilin Chen, Alex Beutel

In addition to the theoretical results, we find on multiple datasets -- including a large-scale real-world recommender system -- that the overall system's end-to-end fairness is largely achievable by improving fairness in individual components.

Fairness Recommendation Systems

Transfer of Machine Learning Fairness across Domains

no code implementations 24 Jun 2019 Candice Schumann, Xuezhi Wang, Alex Beutel, Jilin Chen, Hai Qian, Ed H. Chi

A model trained for one setting may be picked up and used in many others, as is increasingly common with pre-training and cloud APIs.

BIG-bench Machine Learning Domain Adaptation +1

Maximum Likelihood Estimation for Single Linkage Hierarchical Clustering

no code implementations 25 Nov 2015 Dekang Zhu, Dan P. Guralnik, Xuezhi Wang, Xiang Li, Bill Moran

We derive a statistical model for estimating a dendrogram from single linkage hierarchical clustering (SLHC) that accounts for uncertainty due to noise or corruption in the measured separations between data points.

Small Data Image Classification
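
For context, the sketch below runs plain single-linkage clustering on noisy pairwise separations with SciPy; the paper's contribution is a maximum likelihood estimator of the dendrogram that models the measurement noise explicitly, which this baseline does not.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

# Baseline SLHC estimate from noisy pairwise separations (no noise model).
rng = np.random.default_rng(0)
points = rng.normal(size=(10, 2))
clean = pdist(points)                                       # condensed distances
noisy_dists = np.abs(clean + rng.normal(scale=0.05, size=clean.shape))

slhc_dendrogram = linkage(noisy_dists, method="single")     # SLHC linkage matrix
```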

Statistical Properties of the Single Linkage Hierarchical Clustering Estimator

no code implementations 24 Nov 2015 Dekang Zhu, Dan P. Guralnik, Xuezhi Wang, Xiang Li, Bill Moran

Distance-based hierarchical clustering (HC) methods are widely used in unsupervised data analysis, but few take uncertainty in the distance data into account.

Flexible Transfer Learning under Support and Model Shift

no code implementations NeurIPS 2014 Xuezhi Wang, Jeff Schneider

Similarly, work on target/conditional shift focuses on matching marginal distributions on labels $Y$ and adjusting conditional distributions $P(X|Y)$, such that $P(X)$ can be matched across domains.

Transfer Learning
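
The target-shift reweighting idea mentioned above can be sketched as importance weights P_target(Y)/P_source(Y) applied to source examples; the paper itself goes further and handles support and model shift, which this sketch does not cover. The returned weights could then be passed as, e.g., `sample_weight` to a downstream estimator.

```python
import numpy as np

def target_shift_weights(y_source, target_label_marginal):
    """Importance weights that match the source label marginal to the target's.

    `target_label_marginal` is a dict mapping each class label to its assumed
    probability under the target domain (names are illustrative).
    """
    classes, counts = np.unique(y_source, return_counts=True)
    source_marginal = counts / counts.sum()
    weight_per_class = {c: target_label_marginal[c] / p
                        for c, p in zip(classes, source_marginal)}
    return np.array([weight_per_class[y] for y in y_source])
```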
