Search Results for author: Percy Liang

Found 136 papers, 103 papers with code

Overparameterization hurts worst-group accuracy with spurious correlations

no code implementations ICML 2020 Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, Percy Liang

Increasing model capacity well beyond the point of zero training error has been observed to improve average test accuracy.

Large Language Models Can Be Strong Differentially Private Learners

no code implementations 12 Oct 2021 Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto

Differentially Private (DP) learning has seen limited success for building large deep learning models of text, and attempts at straightforwardly applying Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead.

Conditional probing: measuring usable information beyond a baseline

1 code implementation 19 Sep 2021 John Hewitt, Kawin Ethayarajh, Percy Liang, Christopher D. Manning

Probing experiments investigate the extent to which neural representations make properties -- like part-of-speech -- predictable.

Word Embeddings

LM-Critic: Language Models for Unsupervised Grammatical Error Correction

1 code implementation 14 Sep 2021 Michihiro Yasunaga, Jure Leskovec, Percy Liang

Training a model for grammatical error correction (GEC) requires a set of labeled ungrammatical / grammatical sentence pairs, but manually annotating such pairs can be expensive.

Grammatical Error Correction Language Modelling

On the Opportunities and Risks of Foundation Models

1 code implementation 16 Aug 2021 Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

Just Train Twice: Improving Group Robustness without Training Group Information

1 code implementation 19 Jul 2021 Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, Chelsea Finn

Standard training via empirical risk minimization (ERM) can produce models that achieve high accuracy on average but low accuracy on certain groups, especially in the presence of spurious correlations between the input and label.

Image Classification

Codified audio language modeling learns useful representations for music information retrieval

1 code implementation 12 Jul 2021 Rodrigo Castellon, Chris Donahue, Percy Liang

Relative to representations from conventional MIR models, which are pre-trained on tagging, we find that using representations from Jukebox as input features yields 30% stronger performance on average across four MIR tasks: tagging, genre classification, emotion recognition, and key detection.

Emotion Recognition Genre classification +7

Break-It-Fix-It: Unsupervised Learning for Program Repair

1 code implementation 11 Jun 2021 Michihiro Yasunaga, Percy Liang

To bridge this gap, we propose a new training approach, Break-It-Fix-It (BIFI), which has two key ideas: (i) we use the critic to check a fixer's output on real bad inputs and add good (fixed) outputs to the training data, and (ii) we train a breaker to generate realistic bad code from good code.

Code Repair Data Augmentation +3
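
The two-step loop described above maps naturally onto a short training routine. Below is a minimal sketch of one BIFI-style round, not the authors' implementation: `fixer` and `breaker` are assumed seq2seq models with generate/train interfaces, and `critic` is assumed to be a cheap checker such as a compiler or linter.

```python
# Minimal sketch of one Break-It-Fix-It round (illustrative only; all
# interfaces below are assumptions, not the released code).

def bifi_round(fixer, breaker, critic, real_bad_code, real_good_code):
    paired_data = []

    # (i) Run the fixer on real bad inputs; the critic verifies each output,
    # and only verified (bad, fixed) pairs enter the training data.
    for bad in real_bad_code:
        candidate = fixer.generate(bad)
        if critic(candidate):
            paired_data.append((bad, candidate))

    # (ii) Train the breaker in the good -> bad direction on verified pairs,
    # then use it to synthesize realistic bad code from real good code.
    breaker.train([(good, bad) for bad, good in paired_data])
    for good in real_good_code:
        broken = breaker.generate(good)
        if not critic(broken):  # keep only outputs that are genuinely bad
            paired_data.append((broken, good))

    # Retrain the fixer on the enlarged, more realistic paired dataset.
    fixer.train(paired_data)
    return fixer, breaker
```

The point of the sketch is the role of the critic: every pair is checked before it is used, so neither model trains on unverified examples.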

Swords: A Benchmark for Lexical Substitution with Improved Data Coverage and Quality

1 code implementation NAACL 2021 Mina Lee, Chris Donahue, Robin Jia, Alexander Iyabor, Percy Liang

We release a new benchmark for lexical substitution, the task of finding appropriate substitutes for a target word in a context.

QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering

2 code implementations NAACL 2021 Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, Jure Leskovec

The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.

Common Sense Reasoning Graph Representation Learning +4

Can Small and Synthetic Benchmarks Drive Modeling Innovation? A Retrospective Study of Question Answering Modeling Approaches

no code implementations 1 Feb 2021 Nelson F. Liu, Tony Lee, Robin Jia, Percy Liang

While large, natural datasets are necessary for training accurate systems, are they necessary for driving modeling innovation?

Question Answering

Prefix-Tuning: Optimizing Continuous Prompts for Generation

1 code implementation ACL 2021 Xiang Lisa Li, Percy Liang

Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks.

Language Modelling Table-to-Text Generation

In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness

1 code implementation ICLR 2021 Sang Michael Xie, Ananya Kumar, Robbie Jones, Fereshte Khani, Tengyu Ma, Percy Liang

To get the best of both worlds, we introduce In-N-Out, which first trains a model with auxiliary inputs and uses it to pseudolabel all the in-distribution inputs, then pre-trains a model on OOD auxiliary outputs and fine-tunes this model with the pseudolabels (self-training).

Time Series Unsupervised Domain Adaptation
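
The sentence above compresses three training stages; the sketch below lays them out in order. It is schematic only: `train`, `pretrain`, and `finetune` are hypothetical stand-ins for ordinary supervised training loops, with x denoting inputs, z auxiliary information, and y labels.

```python
# Schematic of the In-N-Out recipe (illustrative, not the reference code).

def in_n_out(x_in, z_in, y_in, x_ood, z_ood, train, pretrain, finetune):
    # Stage 1: train an aux-inputs model on in-distribution (x, z) -> y.
    aux_in_model = train(inputs=(x_in, z_in), targets=y_in)

    # Stage 2: use it to pseudolabel all the in-distribution inputs.
    pseudo_y = aux_in_model.predict((x_in, z_in))

    # Stage 3: pre-train an aux-outputs model to predict z from x on OOD data
    # (z is observable even where labels are not), then fine-tune it on the
    # pseudolabels (self-training).
    aux_out_model = pretrain(inputs=x_ood, targets=z_ood)
    return finetune(aux_out_model, inputs=x_in, targets=pseudo_y)
```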

Removing Spurious Features can Hurt Accuracy and Affect Groups Disproportionately

1 code implementation 7 Dec 2020 Fereshte Khani, Percy Liang

The presence of spurious features interferes with the goal of obtaining robust models that perform well across many groups within the population.

Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases

1 code implementation 16 Nov 2020 Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, Yu Su

To facilitate the development of KBQA models with stronger generalization, we construct and release a new large-scale, high-quality dataset with 64,331 questions, GrailQA, and provide evaluation settings for all three levels of generalization.

Question Answering

Selective Classification Can Magnify Disparities Across Groups

no code implementations ICLR 2021 Erik Jones, Shiori Sagawa, Pang Wei Koh, Ananya Kumar, Percy Liang

In this paper, we find that while selective classification can improve average accuracies, it can simultaneously magnify existing accuracy disparities between various groups within a population, especially in the presence of spurious correlations.

Classification General Classification

Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming

2 code implementations NeurIPS 2020 Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli

In this work, we propose a first-order dual SDP algorithm that (1) requires memory only linear in the total number of network activations and (2) requires only a fixed number of forward/backward passes through the network per iteration.

The EOS Decision and Length Extrapolation

1 code implementation 14 Oct 2020 Benjamin Newman, John Hewitt, Percy Liang, Christopher D. Manning

Extrapolation to unseen sequence lengths is a challenge for neural generative models of language.

Learning Adaptive Language Interfaces through Decomposition

no code implementations 11 Oct 2020 Siddharth Karamcheti, Dorsa Sadigh, Percy Liang

Our goal is to create an interactive natural language interface that efficiently and reliably learns from users to complete tasks in simulated robotics settings.

Semantic Parsing

On the Importance of Adaptive Data Collection for Extremely Imbalanced Pairwise Tasks

1 code implementation Findings of the Association for Computational Linguistics 2020 Stephen Mussmann, Robin Jia, Percy Liang

Many pairwise classification tasks, such as paraphrase detection and open-domain question answering, naturally have extreme label imbalance (e.g., $99.99\%$ of examples are negatives).

Active Learning Open-Domain Question Answering

Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices

2 code implementations 6 Aug 2020 Evan Zheran Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn

Learning a new task often requires both exploring to gather task-relevant information and exploiting this information to solve the task.

Meta Reinforcement Learning Visual Navigation

Robustness to Spurious Correlations via Human Annotations

1 code implementation ICML 2020 Megha Srivastava, Tatsunori Hashimoto, Percy Liang

The reliability of machine learning systems critically assumes that the associations between features and labels remain similar between training and test distributions.

Common Sense Reasoning

Learning Abstract Models for Strategic Exploration and Fast Reward Transfer

1 code implementation 12 Jul 2020 Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, Percy Liang

Model-based reinforcement learning (RL) is appealing because (i) it enables planning and thus more strategic exploration, and (ii) by decoupling dynamics from rewards, it enables fast transfer to new reward functions.

Model-based Reinforcement Learning Montezuma's Revenge

Concept Bottleneck Models

2 code implementations ICML 2020 Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang

We seek to learn models that we can interact with using high-level concepts: if the model did not think there was a bone spur in the x-ray, would it still predict severe arthritis?

Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization

2 code implementations 29 Jun 2020 Sang Michael Xie, Tengyu Ma, Percy Liang

Empirically, we show that composed fine-tuning improves over standard fine-tuning on two pseudocode-to-code translation datasets (3% and 6% relative).

Code Translation Denoising +2

Selective Question Answering under Domain Shift

2 code implementations ACL 2020 Amita Kamath, Robin Jia, Percy Liang

In this work, we propose the setting of selective question answering under domain shift, in which a QA model is tested on a mixture of in-domain and out-of-domain data, and must answer (i.e., not abstain on) as many questions as possible while maintaining high accuracy.

Question Answering
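
The answer-or-abstain behavior ultimately reduces to thresholding a confidence score. A minimal sketch of that decision rule follows, assuming each prediction carries a confidence in [0, 1] (for instance from a calibrator fit on held-out data); the paper's specific calibrator is not reproduced here.

```python
# Minimal abstention rule for selective question answering (illustrative).

def selective_answers(predictions, confidences, threshold):
    """Answer only when confidence clears the threshold; None means abstain."""
    return [pred if conf >= threshold else None
            for pred, conf in zip(predictions, confidences)]

def coverage(answers):
    """Fraction of questions actually answered (not abstained on)."""
    return sum(a is not None for a in answers) / len(answers)

# Sweeping the threshold traces out the coverage-accuracy tradeoff:
# higher thresholds mean fewer answers but, ideally, higher accuracy.
```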

Graph-based, Self-Supervised Program Repair from Diagnostic Feedback

2 code implementations ICML 2020 Michihiro Yasunaga, Percy Liang

Second, we present a self-supervised learning paradigm for program repair that leverages unlabeled programs available online to create a large amount of extra program repair examples, which we use to pre-train our models.

Code Generation Graph Learning +2

Enabling Language Models to Fill in the Blanks

3 code implementations ACL 2020 Chris Donahue, Mina Lee, Percy Liang

We show that this approach, which we call infilling by language modeling, can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics.

Language Modelling Text Infilling
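
A toy illustration of how an infilling training example can be constructed: mask spans in the context and append the masked spans after a separator, so an ordinary left-to-right LM learns to generate the missing text. The [blank], [sep], and [answer] token names here are assumptions modeled on the paper's format, not the released code.

```python
# Toy construction of an infilling-by-language-modeling training string
# (illustrative; special-token names are assumptions).

def make_infilling_example(tokens, masked_spans):
    """tokens: list of words; masked_spans: list of (start, end) pairs."""
    context, answers, last = [], [], 0
    for start, end in masked_spans:
        context.extend(tokens[last:start])
        context.append("[blank]")                      # hide the span
        answers.extend(tokens[start:end] + ["[answer]"])
        last = end
    context.extend(tokens[last:])
    # Train the LM on: masked context, separator, then the answers in order.
    return " ".join(context + ["[sep]"] + answers)

print(make_infilling_example(
    "She ate leftover pasta for lunch".split(), [(2, 4), (5, 6)]))
# -> She ate [blank] for [blank] [sep] leftover pasta [answer] lunch [answer]
```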

An Investigation of Why Overparameterization Exacerbates Spurious Correlations

2 code implementations 9 May 2020 Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, Percy Liang

We study why overparameterization -- increasing model size well beyond the point of zero training error -- can hurt test error on minority groups despite improving average test error when there are spurious correlations in the data.

ExpBERT: Representation Engineering with Natural Language Explanations

2 code implementations ACL 2020 Shikhar Murty, Pang Wei Koh, Percy Liang

Suppose we want to specify the inductive bias that married couples typically go on honeymoons for the task of extracting pairs of spouses from text.

Relation Extraction

Distributionally Robust Neural Networks

1 code implementation ICLR 2020 Shiori Sagawa*, Pang Wei Koh*, Tatsunori B. Hashimoto, Percy Liang

Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups.

L2 Regularization Natural Language Inference +1
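
In its simplest form, the group DRO objective replaces the average training loss with the worst group's average loss. A minimal sketch, assuming per-example losses and integer group labels are available (the paper pairs this objective with strong regularization, which the sketch omits):

```python
import numpy as np

# Worst-group training objective (illustrative sketch of group DRO).

def worst_group_loss(losses, groups):
    losses, groups = np.asarray(losses), np.asarray(groups)
    group_means = [losses[groups == g].mean() for g in np.unique(groups)]
    return max(group_means)  # optimize this instead of losses.mean()

# Example: the rare, high-loss group dominates the objective.
print(worst_group_loss([0.1, 0.2, 0.9], [0, 0, 1]))  # -> 0.9
```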

Understanding Self-Training for Gradual Domain Adaptation

2 code implementations ICML 2020 Ananya Kumar, Tengyu Ma, Percy Liang

Machine learning systems must adapt to data distributions that evolve over time, in applications ranging from sensor networks and self-driving car perception modules to brain-machine interfaces.

Unsupervised Domain Adaptation

Understanding and Mitigating the Tradeoff Between Robustness and Accuracy

1 code implementation ICML 2020 Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, Percy Liang

In this work, we precisely characterize the effect of augmentation on the standard error in linear regression when the optimal linear predictor has zero standard and robust error.

Feature Noise Induces Loss Discrepancy Across Groups

1 code implementation ICML 2020 Fereshte Khani, Percy Liang

Our main result is that even when there is no information deficiency specific to one group (e.g., both groups have infinite data), adding the same amount of feature noise to all individuals leads to loss discrepancy.

Learning Autocomplete Systems as a Communication Game

1 code implementation 16 Nov 2019 Mina Lee, Tatsunori B. Hashimoto, Percy Liang

We study textual autocomplete---the task of predicting a full sentence from a partial sentence---as a human-machine communication game.

Shaping Visual Representations with Language for Few-shot Classification

2 code implementations ACL 2020 Jesse Mu, Percy Liang, Noah Goodman

By describing the features and abstractions of our world, language is a crucial tool for human learning and a promising source of supervision for machine learning models.

Classification General Classification +2

Verified Uncertainty Calibration

3 code implementations NeurIPS 2019 Ananya Kumar, Percy Liang, Tengyu Ma

In these experiments, we also estimate the calibration error and ECE more accurately than the commonly used plugin estimators.

Weather Forecasting
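
For context, the "commonly used plugin estimators" mentioned above bin predictions by confidence and average the per-bin gap between confidence and accuracy. The sketch below is the standard equal-width-bin version, i.e., the kind of estimator the paper argues can be inaccurate, not the paper's improved method.

```python
import numpy as np

# Standard plugin ECE estimator with equal-width confidence bins (illustrative).

def plugin_ece(confidences, correct, n_bins=10):
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its sample share
    return ece

print(plugin_ece([0.9, 0.8, 0.6, 0.95], [1, 1, 0, 1]))
```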

Designing and Interpreting Probes with Control Tasks

1 code implementation IJCNLP 2019 John Hewitt, Percy Liang

The selectivity of a probe puts linguistic task accuracy in context with the probe's capacity to memorize from word types.

Part-Of-Speech Tagging

Distributionally Robust Language Modeling

1 code implementation IJCNLP 2019 Yonatan Oren, Shiori Sagawa, Tatsunori B. Hashimoto, Percy Liang

Language models are generally trained on data spanning a wide range of topics (e.g., news, reviews, fiction), but they might be applied to an a priori unknown target distribution (e.g., restaurant reviews).

Language Modelling

Selection via Proxy: Efficient Data Selection for Deep Learning

1 code implementation ICLR 2020 Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, Matei Zaharia

By removing hidden layers from the target model, using smaller architectures, and training for fewer epochs, we create proxies that are an order of magnitude faster to train.

Active Learning
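
Once the cheap proxy exists, the selection step itself is short. A minimal sketch using maximum-entropy uncertainty as the utility signal, one common choice in this setting; the proxy's `predict_proba`-style output is an assumed interface:

```python
import numpy as np

# Sketch of selection-via-proxy (illustrative): rank the pool by the proxy's
# predictive entropy and hand the most uncertain points to the target model.

def select_via_proxy(proxy_probs, k):
    probs = np.clip(np.asarray(proxy_probs), 1e-12, 1.0)
    entropy = -(probs * np.log(probs)).sum(axis=1)  # uncertainty per example
    return np.argsort(-entropy)[:k]                 # top-k most uncertain

print(select_via_proxy([[0.5, 0.5], [0.99, 0.01], [0.6, 0.4]], k=2))  # [0 2]
```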

A Tight Analysis of Greedy Yields Subexponential Time Approximation for Uniform Decision Tree

no code implementations 26 Jun 2019 Ray Li, Percy Liang, Stephen Mussmann

The greedy algorithm's $O(\log n)$ approximation ratio was the best known, but the largest approximation ratio known to be NP-hard is $4-\varepsilon$.

Active Learning

Adversarial Training Can Hurt Generalization

no code implementations 14 Jun 2019 Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang

While adversarial training can improve robust accuracy (against an adversary), it sometimes hurts standard accuracy (when there is no adversary).

SPoC: Search-based Pseudocode to Code

no code implementations NeurIPS 2019 Sumith Kulal, Panupong Pasupat, Kartik Chandra, Mina Lee, Oded Padon, Alex Aiken, Percy Liang

Given test cases as a mechanism to validate programs, we search over the space of possible translations of the pseudocode to find a program that passes the validation.

Program Synthesis Translation
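
The search-and-validate loop can be summarized in a few lines. This is a schematic only: `candidate_translations`, `compile_program`, and `passes` are hypothetical helpers standing in for the translation model, the compiler, and the test harness.

```python
# Schematic of search-based pseudocode-to-code with test-case validation
# (illustrative; all helpers are assumed, not from the paper).

def search_translations(pseudocode_lines, candidate_translations,
                        compile_program, passes, test_cases, budget=100):
    # Enumerate full programs assembled from per-line candidate translations
    # (highest model score first); return the first one that passes all tests.
    for program in candidate_translations(pseudocode_lines, limit=budget):
        binary = compile_program(program)   # None if compilation fails
        if binary is not None and all(passes(binary, t) for t in test_cases):
            return program                  # validated translation found
    return None                             # budget exhausted
```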

Maximum Weighted Loss Discrepancy

1 code implementation 8 Jun 2019 Fereshte Khani, Aditi Raghunathan, Percy Liang

To capture this inequality, we introduce and study a notion we call maximum weighted loss discrepancy (MWLD), the maximum (weighted) difference between the loss of a group and the loss of the population.

Fairness Generalization Bounds

Unlabeled Data Improves Adversarial Robustness

5 code implementations NeurIPS 2019 Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, John C. Duchi

We demonstrate, theoretically and empirically, that adversarial robustness can significantly benefit from semisupervised learning.

Robust classification

Strategies for Pre-training Graph Neural Networks

6 code implementations ICLR 2020 Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, Jure Leskovec

Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training.

Graph Classification Molecular Property Prediction +2

Select Via Proxy: Efficient Data Selection For Training Deep Networks

no code implementations ICLR 2019 Cody Coleman, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, Matei Zaharia

In our approach, we first train a small proxy model quickly, which we then use to estimate the utility of individual training data points, and then select the most informative ones for training the large target model.

Image Classification Language Modelling

Learning Abstract Models for Long-Horizon Exploration

no code implementations ICLR 2019 Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, Percy Liang

In our approach, a manager maintains an abstract MDP over a subset of the abstract states, which grows monotonically through targeted exploration (possible due to the abstract MDP).

Atari Games

Pun Generation with Surprise

2 code implementations NAACL 2019 He He, Nanyun Peng, Percy Liang

We tackle the problem of generating a pun sentence given a pair of homophones (e.g., "died" and "dyed").

Language Modelling Text Generation

Unifying Human and Statistical Evaluation for Natural Language Generation

2 code implementations NAACL 2019 Tatsunori B. Hashimoto, Hugh Zhang, Percy Liang

How can we measure whether a natural language generation system produces both high quality and diverse outputs?

Text Generation

Defending against Whitebox Adversarial Attacks via Randomized Discretization

1 code implementation 25 Mar 2019 Yuchen Zhang, Percy Liang

Adversarial perturbations dramatically decrease the accuracy of state-of-the-art image classifiers.

Adversarial Attack General Classification

Uncertainty Sampling is Preconditioned Stochastic Gradient Descent on Zero-One Loss

1 code implementation NeurIPS 2018 Stephen Mussmann, Percy Liang

Uncertainty sampling, a popular active learning algorithm, is used to reduce the amount of data required to learn a classifier, but it has been observed in practice to converge to different parameters depending on the initialization and sometimes to even better parameters than standard training on all the data.

Active Learning
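
For reference, the uncertainty sampling loop the paper analyzes looks like the following minimal binary-classification sketch, assuming a scikit-learn-style model with `fit` and `predict_proba` and an oracle that supplies labels on request.

```python
import numpy as np

# Minimal uncertainty sampling loop for binary classification (illustrative).

def uncertainty_sampling(model, X_pool, y_oracle, n_seed=10, n_queries=50):
    labeled = list(range(n_seed))                    # small seed set
    for _ in range(n_queries):
        model.fit(X_pool[labeled], y_oracle[labeled])
        probs = model.predict_proba(X_pool)[:, 1]
        uncertainty = -np.abs(probs - 0.5)           # closest to 0.5 wins
        uncertainty[labeled] = -np.inf               # never re-query a point
        labeled.append(int(np.argmax(uncertainty)))  # query the oracle next
    return model, labeled
```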

A Retrieve-and-Edit Framework for Predicting Structured Outputs

1 code implementation NeurIPS 2018 Tatsunori B. Hashimoto, Kelvin Guu, Yonatan Oren, Percy Liang

For the task of generating complex outputs such as source code, editing existing outputs can be easier than generating complex outputs from scratch.

FrAngel: Component-Based Synthesis with Control Structures

2 code implementations 13 Nov 2018 Kensen Shi, Jacob Steinhardt, Percy Liang

We present FrAngel, a new approach to component-based synthesis that can synthesize short Java functions with control structures when given a desired signature, a set of input-output examples, and a collection of libraries (without formal specifications).

Programming Languages

Semidefinite relaxations for certifying robustness to adversarial examples

1 code implementation NeurIPS 2018 Aditi Raghunathan, Jacob Steinhardt, Percy Liang

One promise of ending the arms race is developing certified defenses, ones which are provably robust against all attackers in some family.

QuAC: Question Answering in Context

no code implementations EMNLP 2018 Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer

We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total).

Information Seeking Question Answering +1

Textual Analogy Parsing: What's Shared and What's Compared among Analogous Facts

2 code implementations EMNLP 2018 Matthew Lamm, Arun Tejasvi Chaganty, Christopher D. Manning, Dan Jurafsky, Percy Liang

To understand a sentence like "whereas only 10% of White Americans live at or below the poverty line, 28% of African Americans do", it is important not only to identify individual facts, e.g., poverty rates of distinct demographic groups, but also the higher-order relations between them, e.g., the disparity between them.

Textual Analogy Parsing

Decoupling Strategy and Generation in Negotiation Dialogues

2 code implementations EMNLP 2018 He He, Derek Chen, Anusha Balakrishnan, Percy Liang

We consider negotiation settings in which two agents use natural language to bargain on goods.

Inferring Multidimensional Rates of Aging from Cross-Sectional Data

1 code implementation 12 Jul 2018 Emma Pierson, Pang Wei Koh, Tatsunori Hashimoto, Daphne Koller, Jure Leskovec, Nicholas Eriksson, Percy Liang

Motivated by the study of human aging, we present an interpretable latent-variable model that learns temporal dynamics from cross-sectional data.

Time Series

The price of debiasing automatic metrics in natural language evaluation

1 code implementation 6 Jul 2018 Arun Tejasvi Chaganty, Stephen Mussmann, Percy Liang

For evaluating generation systems, automatic metrics such as BLEU cost nothing to run but have been shown to correlate poorly with human judgment, leading to systematic bias against certain model improvements.

Question Answering

Fairness Without Demographics in Repeated Loss Minimization

1 code implementation ICML 2018 Tatsunori B. Hashimoto, Megha Srivastava, Hongseok Namkoong, Percy Liang

Machine learning models (e.g., speech recognizers) are usually trained to minimize average loss, which results in representation disparity---minority groups (e.g., non-native speakers) contribute less to the training objective and thus tend to suffer higher loss.

Fairness

On the Relationship between Data Efficiency and Error for Uncertainty Sampling

1 code implementation ICML 2018 Stephen Mussmann, Percy Liang

While active learning offers potential cost savings, the actual data efficiency---the reduction in amount of labeled data needed to obtain the same error rate---observed in practice is mixed.

Active Learning

Know What You Don't Know: Unanswerable Questions for SQuAD

10 code implementations ACL 2018 Pranav Rajpurkar, Robin Jia, Percy Liang

Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context.

Natural Language Understanding Question Answering +1

Planning, Inference and Pragmatics in Sequential Language Games

1 code implementation TACL 2018 Fereshte Khani, Noah D. Goodman, Percy Liang

We study sequential language games in which two players, each with private information, communicate to achieve a common goal.

Training Classifiers with Natural Language Explanations

2 code implementations ACL 2018 Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, Christopher Ré

Training accurate classifiers requires many labels, but each label provides only limited information (one bit for binary classification).

General Classification Relation Extraction

Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer

5 code implementations NAACL 2018 Juncen Li, Robin Jia, He He, Percy Liang

We consider the task of text attribute transfer: transforming a sentence to alter a specific attribute (e.g., sentiment) while preserving its attribute-independent content (e.g., changing "screen is just the right size" to "screen is too small").

Image Captioning Style Transfer +1

Generalized Binary Search For Split-Neighborly Problems

no code implementations 27 Feb 2018 Stephen Mussmann, Percy Liang

In sequential hypothesis testing, Generalized Binary Search (GBS) greedily chooses the test with the highest information gain at each step.

Two-sample testing

Reinforcement Learning on Web Interfaces Using Workflow-Guided Exploration

3 code implementations ICLR 2018 Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, Percy Liang

Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates.

Learning a SAT Solver from Single-Bit Supervision

5 code implementations ICLR 2019 Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, David L. Dill

We present NeuroSAT, a message passing neural network that learns to solve SAT problems after only being trained as a classifier to predict satisfiability.

Certified Defenses against Adversarial Examples

4 code implementations ICLR 2018 Aditi Raghunathan, Jacob Steinhardt, Percy Liang

While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs.

Adversarial Attack Adversarial Defense +1

Learning Overcomplete HMMs

no code implementations NeurIPS 2017 Vatsal Sharan, Sham Kakade, Percy Liang, Gregory Valiant

On the other hand, we show that learning is impossible given only a polynomial number of samples for HMMs with a small output alphabet and whose transition matrices are random regular graphs with large degree.

Unsupervised Transformation Learning via Convex Relaxations

1 code implementation NeurIPS 2017 Tatsunori B. Hashimoto, John C. Duchi, Percy Liang

Our goal is to extract meaningful transformations from raw images, such as varying the thickness of lines in handwriting or the lighting in a portrait.

Generating Sentences by Editing Prototypes

3 code implementations TACL 2018 Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, Percy Liang

We propose a new generative model of sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence.

Language Modelling Sentence Similarity

Importance sampling for unbiased on-demand evaluation of knowledge base population

no code implementations EMNLP 2017 Arun Chaganty, Ashwin Paranjape, Percy Liang, Christopher D. Manning

Our first contribution is a new importance-sampling-based evaluation which corrects for this bias by annotating a new system's predictions on-demand via crowdsourcing.

Information Retrieval Knowledge Base Population +1

World of Bits: An Open-Domain Platform for Web-Based Agents

no code implementations ICML 2017 Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, Percy Liang

While simulated game environments have greatly accelerated research in reinforcement learning, existing environments lack the open-domain realism of tasks in computer vision or natural language processing, which operate on artifacts created by humans in natural, organic settings.

Macro Grammars and Holistic Triggering for Efficient Semantic Parsing

2 code implementations EMNLP 2017 Yuchen Zhang, Panupong Pasupat, Percy Liang

To learn a semantic parser from denotations, a learning algorithm must search over a combinatorially large space of logical forms for ones consistent with the annotated denotations.

Semantic Parsing Sentence Similarity

Adversarial Examples for Evaluating Reading Comprehension Systems

2 code implementations EMNLP 2017 Robin Jia, Percy Liang

Standard accuracy metrics indicate that reading comprehension systems are making rapid progress, but the extent to which these systems truly understand language remains unclear.

Accuracy Metrics Question Answering +1

Developing Bug-Free Machine Learning Systems With Formal Mathematics

1 code implementation ICML 2017 Daniel Selsam, Percy Liang, David L. Dill

As a case study, we implement a new system, Certigrad, for optimizing over stochastic computation graphs, and we generate a formal (i.e., machine-checkable) proof that the gradients sampled by the system are unbiased estimates of the true mathematical gradients.

Certified Defenses for Data Poisoning Attacks

1 code implementation NeurIPS 2017 Jacob Steinhardt, Pang Wei Koh, Percy Liang

Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model.

Data Poisoning

From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood

3 code implementations ACL 2017 Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, Percy Liang

Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself.

Semantic Parsing

Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings

2 code implementations ACL 2017 He He, Anusha Balakrishnan, Mihail Eric, Percy Liang

To model both structured knowledge and unstructured language, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses.

Knowledge Graph Embeddings

Naturalizing a Programming Language via Interactive Learning

1 code implementation ACL 2017 Sida I. Wang, Samuel Ginn, Percy Liang, Christopher D. Manning

Our goal is to create a convenient natural language interface for performing well-specified but complex actions such as analyzing data, manipulating text, and querying databases.

A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics

no code implementations 18 Feb 2017 Yuchen Zhang, Percy Liang, Moses Charikar

We study the Stochastic Gradient Langevin Dynamics (SGLD) algorithm for non-convex optimization.

Prediction with a Short Memory

no code implementations 8 Dec 2016 Vatsal Sharan, Sham Kakade, Percy Liang, Gregory Valiant

For a Hidden Markov Model with $n$ hidden states, $I$ is bounded by $\log n$, a quantity that does not depend on the mixing time, and we show that the trivial prediction algorithm based on the empirical frequencies of length $O(\log n/\epsilon)$ windows of observations achieves this error, provided the length of the sequence is $d^{\Omega(\log n/\epsilon)}$, where $d$ is the size of the observation alphabet.

Convexified Convolutional Neural Networks

1 code implementation ICML 2017 Yuchen Zhang, Percy Liang, Martin J. Wainwright

For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN.

Denoising

How Much is 131 Million Dollars? Putting Numbers in Perspective with Compositional Descriptions

1 code implementation ACL 2016 Arun Tejasvi Chaganty, Percy Liang

We then propose a two-step system for generating these descriptions: formula construction followed by description generation.

Estimation from Indirect Supervision with Linear Moments

1 code implementation 10 Aug 2016 Aditi Raghunathan, Roy Frostig, John Duchi, Percy Liang

In structured prediction problems where we have indirect supervision of the output, maximum marginal likelihood faces two computational obstacles: non-convexity of the objective and intractability of even a single gradient computation.

Structured Prediction

Synthesizing Program Input Grammars

1 code implementation 5 Aug 2016 Osbert Bastani, Rahul Sharma, Alex Aiken, Percy Liang

We present an algorithm for synthesizing a context-free grammar encoding the language of valid program inputs from a set of input examples and blackbox access to the program.

Programming Languages

Inferring Logical Forms From Denotations

2 code implementations ACL 2016 Panupong Pasupat, Percy Liang

A core problem in learning semantic parsers from denotations is picking out consistent logical forms--those that yield the correct denotation--from a combinatorially large space.

Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings

1 code implementation 20 Jun 2016 Fereshte Khani, Martin Rinard, Percy Liang

Specifically, we introduce the unanimity principle: only predict when all models consistent with the training data predict the same output.

Semantic Parsing
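
The unanimity principle is a one-line decision rule once the set of models consistent with the training data is in hand; a toy sketch, treating that set as given:

```python
# Toy sketch of the unanimity principle (illustrative): predict only when
# every model consistent with the training data agrees on the output.

def unanimous_predict(consistent_models, x):
    outputs = {model(x) for model in consistent_models}
    return outputs.pop() if len(outputs) == 1 else None  # None = abstain

models = [lambda x: x * 2, lambda x: x + x]
print(unanimous_predict(models, 3))                   # 6: unanimous, predict
print(unanimous_predict(models + [lambda x: x], 3))   # None: disagreement
```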

Unsupervised Risk Estimation Using Only Conditional Independence Structure

no code implementations NeurIPS 2016 Jacob Steinhardt, Percy Liang

We show how to estimate a model's test error from unlabeled data, on distributions very different from the training distribution, while assuming only that certain conditional independencies are preserved between train and test.

Simpler Context-Dependent Logical Forms via Model Projections

1 code implementation ACL 2016 Reginald Long, Panupong Pasupat, Percy Liang

With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances.

Semantic Parsing

SQuAD: 100,000+ Questions for Machine Comprehension of Text

17 code implementations EMNLP 2016 Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang

We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.

Question Answering Reading Comprehension

Data Recombination for Neural Semantic Parsing

1 code implementation ACL 2016 Robin Jia, Percy Liang

Modeling crisp logical regularities is crucial in semantic parsing, making it difficult for neural models with no task-specific prior knowledge to achieve good results.

Semantic Parsing

Learning Language Games through Interaction

3 code implementations ACL 2016 Sida I. Wang, Percy Liang, Christopher D. Manning

We introduce a new language learning setting relevant to building adaptive natural language interfaces.

Semantic Parsing

Estimating Mixture Models via Mixtures of Polynomials

3 code implementations NeurIPS 2015 Sida I. Wang, Arun Tejasvi Chaganty, Percy Liang

This framework allows us to draw insights and apply tools from convex optimization, computer algebra and the theory of moments to study problems in statistical estimation.

Learning Executable Semantic Parsers for Natural Language Understanding

no code implementations 22 Mar 2016 Percy Liang

For building question answering systems and natural language interfaces, semantic parsing has emerged as an important and powerful paradigm.

Natural Language Understanding Question Answering +1

Data Augmentation via Levy Processes

1 code implementation 21 Mar 2016 Stefan Wager, William Fithian, Percy Liang

The framework imagines data as being drawn from a slice of a Levy process.

Image Augmentation

Compositional Semantic Parsing on Semi-Structured Tables

3 code implementations IJCNLP 2015 Panupong Pasupat, Percy Liang

Two important aspects of semantic parsing for question answering are the breadth of the knowledge source and the depth of logical compositionality.

Question Answering Semantic Parsing

Traversing Knowledge Graphs in Vector Space

1 code implementation EMNLP 2015 Kelvin Guu, John Miller, Percy Liang

Path queries on a knowledge graph can be used to answer compositional questions such as "What languages are spoken by people living in Lisbon?".

Knowledge Base Completion Knowledge Graphs
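
In the compositional vector-space view, answering a path query amounts to repeatedly applying a relation-specific transformation to an entity embedding and then scoring candidate answers. The sketch below uses a bilinear, matrix-per-relation form with random stand-in parameters; the entity and relation names are purely illustrative.

```python
import numpy as np

# Minimal vector-space traversal of a knowledge-graph path query (illustrative).

rng = np.random.default_rng(0)
dim = 8
entity = {"lisbon": rng.normal(size=dim), "portuguese": rng.normal(size=dim)}
relation = {"lives_in_inv": rng.normal(size=(dim, dim)),
            "speaks": rng.normal(size=(dim, dim))}

def traverse(start, path):
    vec = entity[start]
    for r in path:                  # compose relations along the query path
        vec = vec @ relation[r]
    return vec

# Score one candidate answer to "What languages are spoken by people
# living in Lisbon?" (higher dot product = more plausible under the model).
query_vec = traverse("lisbon", ["lives_in_inv", "speaks"])
print(query_vec @ entity["portuguese"])
```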

Learning Where to Sample in Structured Prediction

1 code implementation 9 May 2015 Tianlin Shi, Jacob Steinhardt, Percy Liang

In structured prediction, most inference algorithms allocate a homogeneous amount of computation to all parts of the output, which can be wasteful when different parts vary widely in terms of difficulty.

Structured Prediction

Learning Fast-Mixing Models for Structured Prediction

1 code implementation 24 Feb 2015 Jacob Steinhardt, Percy Liang

Markov Chain Monte Carlo (MCMC) algorithms are often used for approximate inference inside learning, but their slow mixing can be difficult to diagnose and the approximations can seriously degrade learning.

Structured Prediction

Reified Context Models

1 code implementation 24 Feb 2015 Jacob Steinhardt, Percy Liang

A classic tension exists between exact inference in a simple model and approximate inference in a complex model.

Tensor Factorization via Matrix Factorization

1 code implementation 29 Jan 2015 Volodymyr Kuleshov, Arun Tejasvi Chaganty, Percy Liang

Tensor factorization arises in many machine learning applications, such as knowledge base modeling and parameter estimation in latent variable models.

Latent Variable Models

Imitation Learning of Agenda-based Semantic Parsers

1 code implementation TACL 2015 Jonathan Berant, Percy Liang

Semantic parsers conventionally construct logical forms bottom-up in a fixed order, resulting in the generation of many extraneous partial logical forms.

Imitation Learning Question Answering +1

The Statistics of Streaming Sparse Regression

no code implementations 13 Dec 2014 Jacob Steinhardt, Stefan Wager, Percy Liang

We present a sparse analogue to stochastic gradient descent that is guaranteed to perform well under similar conditions to the lasso.

Altitude Training: Strong Bounds for Single-Layer Dropout

no code implementations NeurIPS 2014 Stefan Wager, William Fithian, Sida Wang, Percy Liang

Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks.

Relaxations for inference in restricted Boltzmann machines

no code implementations 21 Dec 2013 Sida I. Wang, Roy Frostig, Percy Liang, Christopher D. Manning

We propose a relaxation-based approximate inference algorithm that samples near-MAP configurations of a binary pairwise Markov random field.

Lambda Dependency-Based Compositional Semantics

no code implementations cs.CL 2013 Percy Liang

This short note presents a new formal language, lambda dependency-based compositional semantics (lambda DCS) for representing logical forms in semantic parsing.

Semantic Parsing

Dropout Training as Adaptive Regularization

no code implementations NeurIPS 2013 Stefan Wager, Sida Wang, Percy Liang

Dropout and other feature noising schemes control overfitting by artificially corrupting the training data.

Document Classification

Spectral Experts for Estimating Mixtures of Linear Regressions

no code implementations 17 Jun 2013 Arun Tejasvi Chaganty, Percy Liang

Discriminative latent-variable models are typically learned using EM or gradient-based optimization, which suffer from local optima.

Latent Variable Models

Learning Semantic Correspondences with Less Supervision

1 code implementation 1 Aug 2009 Percy Liang, Michael Jordan, Dan Klein

A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state.

Language Acquisition
