Search Results for author: Eric Wallace

Found 30 papers, 18 papers with code

Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers

no code implementations ICML 2020 Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph Gonzalez

Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference.

Machine Translation, Quantization, +1

Automated Crossword Solving

1 code implementation ACL 2022 Eric Wallace, Nicholas Tomlin, Albert Xu, Kevin Yang, Eshaan Pathak, Matthew Ginsberg, Dan Klein

We present the Berkeley Crossword Solver, a state-of-the-art approach for automatically solving crossword puzzles.

Question Answering

InCoder: A Generative Model for Code Infilling and Synthesis

2 code implementations 12 Apr 2022 Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis

Our model is the first generative model that is able to directly perform zero-shot code infilling, which we evaluate on challenging tasks such as type inference, comment generation, and variable re-naming.

Program Synthesis

Deduplicating Training Data Mitigates Privacy Risks in Language Models

no code implementations14 Feb 2022 Nikhil Kandpal, Eric Wallace, Colin Raffel

Past work has shown that large language models are susceptible to privacy attacks, where adversaries generate sequences from a trained model and detect which sequences are memorized from the training set.
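
The mitigation in the title is deduplication of the training data before the model is trained. Below is a minimal sketch of exact-duplicate filtering; it only illustrates the idea and is not the paper's pipeline, which also measures approximate and substring-level duplication.

```python
import hashlib

def deduplicate(documents):
    """Drop exact duplicate documents by hashing whitespace/case-normalized text.

    Illustrative stand-in only: the paper also studies near-duplicates and
    repeated substrings, which whole-document hashing will not catch.
    """
    seen = set()
    unique_docs = []
    for doc in documents:
        key = hashlib.sha256(" ".join(doc.split()).lower().encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique_docs.append(doc)
    return unique_docs

corpus = ["The cat sat on the mat.", "the cat  sat on the mat.", "A different sentence."]
print(deduplicate(corpus))  # the whitespace/case-variant duplicate is removed
```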

Analyzing Dynamic Adversarial Training Data in the Limit

1 code implementation Findings (ACL) 2022 Eric Wallace, Adina Williams, Robin Jia, Douwe Kiela

To create models that are robust across a wide range of test inputs, training datasets should include diverse examples that span numerous phenomena.

Calibrate Before Use: Improving Few-Shot Performance of Language Models

2 code implementations 19 Feb 2021 Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh

We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the training examples can cause accuracy to vary from near chance to near state-of-the-art.

Few-Shot Learning
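
The remedy the paper proposes is contextual calibration: estimate a prompt's bias from a content-free input such as "N/A" and divide it out of the label probabilities. A minimal numpy sketch of that rescaling step follows, with made-up probabilities standing in for real LM outputs.

```python
import numpy as np

def contextual_calibration(p_cf, p_input):
    """Rescale label probabilities by the bias measured on a content-free input.

    p_cf:    probabilities the LM assigns to each label for a content-free
             prompt such as "N/A" (estimates prompt/ordering bias).
    p_input: probabilities for the actual test input under the same prompt.
    Returns p_input / p_cf, renormalized to a distribution.
    """
    q = p_input / p_cf          # divide out the estimated bias, elementwise
    return q / q.sum()          # renormalize

# Hypothetical numbers: the prompt is heavily biased toward label 0.
p_cf = np.array([0.7, 0.2, 0.1])      # content-free input already "prefers" label 0
p_input = np.array([0.5, 0.3, 0.2])   # raw prediction for a real input
print(contextual_calibration(p_cf, p_input))  # the bias toward label 0 is reduced
```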

Extracting Training Data from Large Language Models

3 code implementations 14 Dec 2020 Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel

We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data.

Language Modelling
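
The attack described above follows a generate-then-rank recipe: sample many sequences from the model and flag the ones it is unusually confident about. The sketch below illustrates this with the Hugging Face transformers API and the small gpt2 checkpoint; the paper attacks larger GPT-2 variants and uses additional ranking signals (reference models, zlib entropy), so treat this as a toy illustration only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sample_candidates(n=5, max_length=64):
    """Unconditionally sample short sequences from the model."""
    input_ids = torch.full((n, 1), tokenizer.bos_token_id, dtype=torch.long)
    with torch.no_grad():
        out = model.generate(input_ids, do_sample=True, top_k=40,
                             max_length=max_length,
                             pad_token_id=tokenizer.eos_token_id)
    return [tokenizer.decode(ids, skip_special_tokens=True) for ids in out]

def perplexity(text):
    """Lower perplexity = higher model confidence, a crude memorization signal."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Rank generations by perplexity; the most confident ones are candidate memorizations.
candidates = sample_candidates()
for ppl, text in sorted((perplexity(t), t) for t in candidates):
    print(f"{ppl:8.1f}  {text[:60]!r}")
```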

Interpreting Predictions of NLP Models

no code implementations EMNLP 2020 Eric Wallace, Matt Gardner, Sameer Singh

Although neural NLP models are highly expressive and empirically successful, they also systematically fail in counterintuitive ways and are opaque in their decision-making process.

Decision Making

Concealed Data Poisoning Attacks on NLP Models

no code implementations NAACL 2021 Eric Wallace, Tony Z. Zhao, Shi Feng, Sameer Singh

In this work, we develop a new data poisoning attack that allows an adversary to control model predictions whenever a desired trigger phrase is present in the input.

Data Poisoning, Language Modelling, +2

Gradient-based Analysis of NLP Models is Manipulable

no code implementations Findings (EMNLP) 2020 Junlin Wang, Jens Tuyls, Eric Wallace, Sameer Singh

Gradient-based analysis methods, such as saliency map visualizations and adversarial input perturbations, have found widespread use in interpreting neural NLP models due to their simplicity, flexibility, and most importantly, their faithfulness.

Text Classification
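
The saliency maps referred to above score each input token by the gradient of the model's prediction with respect to that token's embedding. A self-contained PyTorch sketch with a tiny, randomly initialized stand-in model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "terrible": 4}
embed = nn.Embedding(len(vocab), 16)
classifier = nn.Linear(16, 2)  # two "sentiment" classes, untrained stand-in

tokens = ["the", "movie", "was", "great"]
ids = torch.tensor([[vocab[t] for t in tokens]])

emb = embed(ids)                      # (1, seq_len, dim)
emb.retain_grad()                     # keep gradients on this non-leaf tensor
logits = classifier(emb.mean(dim=1))  # mean-pool tokens, then classify
logits[0, logits.argmax()].backward() # backprop the predicted-class logit

saliency = emb.grad.norm(dim=-1).squeeze(0)   # one score per token
for tok, score in zip(tokens, saliency.tolist()):
    print(f"{tok:10s} {score:.4f}")
```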

Evaluating NLP Models via Contrast Sets

no code implementations1 Oct 2020 Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, A. Zhang, Ben Zhou

Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.

Reading Comprehension, Sentiment Analysis

Imitation Attacks and Defenses for Black-box Machine Translation Systems

1 code implementation EMNLP 2020 Eric Wallace, Mitchell Stern, Dawn Song

To mitigate these vulnerabilities, we propose a defense that modifies translation outputs in order to misdirect the optimization of imitation models.

Machine Translation, Translation

Pretrained Transformers Improve Out-of-Distribution Robustness

1 code implementation ACL 2020 Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, Dawn Song

Although pretrained Transformers such as BERT achieve high accuracy on in-distribution examples, do they generalize to new distributions?

Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers

2 code implementations 26 Feb 2020 Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. Gonzalez

Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference.

Machine Translation, Quantization, +1

AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models

1 code implementation IJCNLP 2019 Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh

Neural NLP models are increasingly accurate but are imperfect and opaque: they break in counterintuitive ways and leave end users puzzled at their behavior.

Language Modelling, Masked Language Modeling, +1

Do NLP Models Know Numbers? Probing Numeracy in Embeddings

1 code implementation IJCNLP 2019 Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, Matt Gardner

The ability to understand and work with numbers (numeracy) is critical for many complex reasoning tasks.

Question Answering

Universal Adversarial Triggers for Attacking and Analyzing NLP

1 code implementation IJCNLP 2019 Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh

We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset.

Language Modelling, Reading Comprehension
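
The triggers themselves are found with a gradient-guided (HotFlip-style) token search, which is beyond a short sketch; the snippet below only illustrates the evaluation described in the sentence above, i.e. concatenating a fixed trigger to every input and measuring how often a classifier now emits the target label. The keyword-based toy_classifier is a hypothetical stand-in for a real model.

```python
def toy_classifier(text):
    """Hypothetical stand-in model: naive keyword counting for sentiment."""
    words = text.lower().split()
    positive = sum(words.count(w) for w in ["great", "good", "wonderful"])
    negative = sum(words.count(w) for w in ["bad", "awful", "boring"])
    return "positive" if positive >= negative else "negative"

def attack_success_rate(inputs, trigger, target_label):
    """Fraction of inputs pushed to `target_label` once the trigger is prepended."""
    flipped = sum(toy_classifier(f"{trigger} {x}") == target_label for x in inputs)
    return flipped / len(inputs)

negative_reviews = ["an awful boring film", "bad acting and a bad script"]
trigger = "wonderful wonderful wonderful"   # an (obviously unsubtle) trigger
print(attack_success_rate(negative_reviews, trigger, target_label="positive"))
```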

Misleading Failures of Partial-input Baselines

no code implementations ACL 2019 Shi Feng, Eric Wallace, Jordan Boyd-Graber

Recent work establishes dataset difficulty and removes annotation artifacts via partial-input baselines (e.g., hypothesis-only models for SNLI or question-only models for VQA).

Natural Language Inference, Visual Question Answering, +1
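
A partial-input baseline simply trains a model on part of each example, e.g. the hypothesis alone for NLI, and checks how far above chance it gets. A minimal scikit-learn sketch on an invented toy dataset follows; it shows the setup only and does not reproduce any result from the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (premise, hypothesis, label) triples; the premise is deliberately ignored.
data = [
    ("A man is playing guitar.", "A person is making music.", "entailment"),
    ("A dog runs in the park.", "Nobody is outside.", "contradiction"),
    ("Kids are playing soccer.", "Children are playing a sport.", "entailment"),
    ("The chef cooks pasta.", "Nobody is cooking anything.", "contradiction"),
]
hypotheses = [h for _, h, _ in data]
labels = [y for _, _, y in data]

# Hypothesis-only baseline: whatever it learns comes from annotation artifacts.
baseline = make_pipeline(CountVectorizer(), LogisticRegression())
baseline.fit(hypotheses, labels)
print(baseline.predict(["Nobody is singing."]))  # likely 'contradiction' via "nobody"
```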

Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation

1 code implementation 1 Feb 2019 Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi

Second, we compute the importance of group-features in deep learning interpretation by introducing a sparsity regularization term.

Feature Importance, General Classification

Pathologies of Neural Models Make Interpretations Difficult

no code implementations EMNLP 2018 Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, Jordan Boyd-Graber

In existing interpretation methods for NLP, a word's importance is determined either by input perturbation (measuring the decrease in model confidence when that word is removed) or by the gradient with respect to that word.
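
The first of the two methods mentioned above, leave-one-out input perturbation, is easy to sketch: a word's importance is the drop in the model's confidence when that word is deleted. The model_confidence function below is a hypothetical stand-in for a real model's probability of its original prediction.

```python
def model_confidence(words):
    """Toy stand-in: confidence grows with the number of sentiment-bearing words."""
    cues = {"great": 0.3, "loved": 0.25, "boring": -0.2}
    return min(0.99, max(0.01, 0.5 + sum(cues.get(w, 0.0) for w in words)))

def leave_one_out_importance(words):
    """Importance of word i = confidence(full input) - confidence(input without word i)."""
    base = model_confidence(words)
    return {w: base - model_confidence(words[:i] + words[i + 1:])
            for i, w in enumerate(words)}

sentence = ["i", "loved", "this", "great", "movie"]
for word, score in leave_one_out_importance(sentence).items():
    print(f"{word:8s} {score:+.2f}")
```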
