Search Results for author: Suchin Gururangan

Found 26 papers, 17 papers with code

Expected Validation Performance and Estimation of a Random Variable’s Maximum

no code implementations • Findings (EMNLP) 2021 • Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, Noah A. Smith

We find that the two biased estimators lead to the fewest incorrect conclusions, which hints at the importance of minimizing variance and MSE.
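The estimator at issue can be written down in a few lines. Below is a minimal numpy sketch (not the authors' released code) of the empirical-CDF plug-in estimator of an expected maximum: given m validation scores from random hyperparameter assignments, estimate the expected best score among k draws. The toy scores are illustrative.

```python
import numpy as np

def expected_max(scores, k):
    """E[max of k draws], estimated from the empirical CDF of `scores`."""
    v = np.sort(np.asarray(scores, dtype=float))
    m = len(v)
    # P(all k draws <= v_i) under the empirical distribution.
    cdf_pow = (np.arange(1, m + 1) / m) ** k
    # Probability mass that v_i is exactly the maximum of k draws.
    pmf = np.diff(np.concatenate(([0.0], cdf_pow)))
    return float(np.sum(v * pmf))

rng = np.random.default_rng(0)
scores = rng.normal(0.80, 0.05, size=50)  # e.g., 50 validation accuracies
print([round(expected_max(scores, k), 4) for k in (1, 5, 20)])
```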

LESS: Selecting Influential Data for Targeted Instruction Tuning

1 code implementation • 6 Feb 2024 • Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, Danqi Chen

Instruction tuning has unlocked powerful capabilities in large language models (LLMs), effectively using combined datasets to develop general-purpose chatbots.

Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models

no code implementations • 19 Jan 2024 • Terra Blevins, Tomasz Limisiewicz, Suchin Gururangan, Margaret Li, Hila Gonen, Noah A. Smith, Luke Zettlemoyer

Despite their popularity in non-English NLP, multilingual language models often underperform monolingual ones due to inter-language competition for model parameters.

Time is Encoded in the Weights of Finetuned Language Models

1 code implementation • 20 Dec 2023 • Kai Nylund, Suchin Gururangan, Noah A. Smith

We present time vectors, a simple tool to customize language models to new time periods.

Language Modelling
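A minimal sketch of the time-vector idea, assuming simple weight arithmetic over checkpoints: a time vector is the difference between weights finetuned on one time period and the pretrained weights, and vectors for nearby periods can be interpolated toward intervening times. Toy numpy arrays stand in for real model parameters.

```python
import numpy as np

def time_vector(finetuned, base):
    return {k: finetuned[k] - base[k] for k in base}

def apply_vector(base, vec, alpha=1.0):
    return {k: base[k] + alpha * vec[k] for k in base}

base = {"w": np.zeros(4)}
ft_2012 = {"w": np.array([1.0, 0.0, 0.0, 0.0])}
ft_2016 = {"w": np.array([0.0, 0.0, 0.0, 1.0])}

v12, v16 = time_vector(ft_2012, base), time_vector(ft_2016, base)
# Interpolate toward an unseen intermediate period (e.g., ~2014).
v14 = {k: 0.5 * v12[k] + 0.5 * v16[k] for k in base}
print(apply_vector(base, v14)["w"])
```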

SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore

1 code implementation • 8 Aug 2023 • Sewon Min, Suchin Gururangan, Eric Wallace, Hannaneh Hajishirzi, Noah A. Smith, Luke Zettlemoyer

SILO is built by (1) training a parametric LM on Open License Corpus (OLC), a new corpus we curate with 228B tokens of public domain and permissively licensed text, and (2) augmenting it with a more general and easily modifiable nonparametric datastore (e.g., containing copyrighted books or news) that is only queried during inference.

Language Modelling • Sentence
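A minimal sketch of the SILO split, assuming a kNN-LM-style retrieval component: the parametric LM sees only permissively licensed text at training time, while higher-risk text lives in a key/value datastore that is queried, and can be edited or removed, at inference. The vectors and tokens below are toy stand-ins for real context embeddings.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

vocab = 5
keys = np.random.default_rng(1).normal(size=(100, 8))      # datastore contexts
values = np.random.default_rng(2).integers(0, vocab, 100)  # stored next tokens

def knn_probs(query, k=8, temp=1.0):
    d = np.linalg.norm(keys - query, axis=1)
    idx = np.argsort(d)[:k]
    w = softmax(-d[idx] / temp)
    p = np.zeros(vocab)
    for token, weight in zip(values[idx], w):
        p[token] += weight
    return p

def silo_next_token_probs(lm_logits, query, lam=0.3):
    # Interpolate the parametric LM with the removable datastore.
    return (1 - lam) * softmax(lm_logits) + lam * knn_probs(query)

print(silo_next_token_probs(np.zeros(vocab), np.zeros(8)).round(3))
```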

Scaling Expert Language Models with Unsupervised Domain Discovery

1 code implementation • 24 Mar 2023 • Suchin Gururangan, Margaret Li, Mike Lewis, Weijia Shi, Tim Althoff, Noah A. Smith, Luke Zettlemoyer

Large language models are typically trained densely: all parameters are updated with respect to all inputs.

Language Modelling
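A minimal sketch of the sparse alternative the paper proposes (c-BTM), assuming k-means document clusters: train one expert LM per discovered cluster (training is elided here) and, at inference, weight each expert by its cluster's proximity to the current context embedding, keeping only the top-k experts.

```python
import numpy as np

def route(context_emb, centers, top_k=2, temp=0.1):
    d = np.linalg.norm(centers - context_emb, axis=1)
    w = np.exp(-d / temp)
    # Sparsify: keep only the top-k closest experts.
    keep = np.argsort(d)[:top_k]
    sparse = np.zeros_like(w)
    sparse[keep] = w[keep]
    return sparse / sparse.sum()

centers = np.eye(4)                   # 4 cluster centers (toy embeddings)
ctx = np.array([0.9, 0.1, 0.0, 0.0])  # current context embedding
print(route(ctx, centers).round(3))   # ensemble weights over 4 experts
```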

Editing Models with Task Arithmetic

3 code implementations • 8 Dec 2022 • Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, Ali Farhadi

Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine learning systems.

Negation
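A minimal sketch of the task-vector arithmetic behind this paper: a task vector is the elementwise difference between finetuned and pretrained weights; adding vectors composes tasks, and negating one forgets it. Toy numpy dicts stand in for real checkpoints.

```python
import numpy as np

pre = {"w": np.array([0.0, 0.0])}
ft_task_a = {"w": np.array([1.0, 0.0])}
ft_task_b = {"w": np.array([0.0, 1.0])}

def task_vector(ft, pre):
    return {k: ft[k] - pre[k] for k in pre}

def edit(pre, vectors, coeffs):
    out = {k: v.copy() for k, v in pre.items()}
    for vec, lam in zip(vectors, coeffs):
        for k in out:
            out[k] += lam * vec[k]
    return out

ta, tb = task_vector(ft_task_a, pre), task_vector(ft_task_b, pre)
multi = edit(pre, [ta, tb], [0.5, 0.5])  # add both tasks
forget = edit(pre, [ta], [-1.0])         # negate task A
print(multi["w"], forget["w"])
```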

lo-fi: distributed fine-tuning without communication

no code implementations • 19 Oct 2022 • Mitchell Wortsman, Suchin Gururangan, Shen Li, Ali Farhadi, Ludwig Schmidt, Michael Rabbat, Ari S. Morcos

When fine-tuning DeiT-base and DeiT-large on ImageNet, this procedure matches accuracy in-distribution and improves accuracy under distribution shift compared to the baseline, which observes the same amount of data but communicates gradients at each step.
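A minimal sketch of the lo-fi recipe as the abstract describes it: each node fine-tunes its own copy of the model with no gradient communication, and the node-local weights are averaged once at the end. `local_finetune` below is a hypothetical placeholder for a full fine-tuning run; toy numpy weights stand in for DeiT checkpoints.

```python
import numpy as np

def local_finetune(weights, rng):
    # Hypothetical placeholder for fine-tuning on one node's data shard.
    return {k: v + rng.normal(scale=0.1, size=v.shape) for k, v in weights.items()}

init = {"w": np.zeros(3)}
rngs = [np.random.default_rng(seed) for seed in range(4)]
node_weights = [local_finetune(init, rng) for rng in rngs]  # no communication

# One-shot weight averaging replaces per-step gradient synchronization.
averaged = {k: np.mean([nw[k] for nw in node_weights], axis=0) for k in init}
print(averaged["w"])
```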

M2D2: A Massively Multi-domain Language Modeling Dataset

1 code implementation • 13 Oct 2022 • Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer

We present M2D2, a fine-grained, massively multi-domain corpus for studying domain adaptation in language models (LMs).

Domain Generalization • Language Modelling

Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models

2 code implementations • 5 Aug 2022 • Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A. Smith, Luke Zettlemoyer

New expert language models (ELMs) are learned by branching from (mixtures of) ELMs in the current set, further training the parameters on data for the new domain, and then merging the resulting model back into the set for future use.
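A minimal sketch of one branch-train-merge step under these assumptions: branching initializes a new expert as a parameter average of existing ELMs, training is elided behind a hypothetical placeholder, and merging adds the trained expert back to the set.

```python
import numpy as np

def branch(elms, mix_weights):
    """Initialize a new expert as a weighted parameter average of ELMs."""
    return {k: sum(w * elm[k] for elm, w in zip(elms, mix_weights))
            for k in elms[0]}

def train_on_domain(weights, rng):
    # Hypothetical placeholder for further training on the new domain.
    return {k: v + rng.normal(scale=0.1, size=v.shape) for k, v in weights.items()}

elms = [{"w": np.ones(3)}, {"w": -np.ones(3)}]
seed = branch(elms, mix_weights=[0.7, 0.3])                # branch
new_elm = train_on_domain(seed, np.random.default_rng(0))  # train
elms.append(new_elm)                                       # merge into the set
print(len(elms), new_elm["w"].round(2))
```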

kNN-Prompt: Nearest Neighbor Zero-Shot Inference

1 code implementation • 27 May 2022 • Weijia Shi, Julian Michael, Suchin Gururangan, Luke Zettlemoyer

Retrieval-augmented language models (LMs) use non-parametric memory to substantially outperform their non-retrieval counterparts on perplexity-based evaluations, but it is an open question whether they achieve similar gains in few- and zero-shot end-task accuracy.

Domain Adaptation • Language Modelling • +6
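A minimal sketch of the kNN-LM-style interpolation that kNN-Prompt builds on: blend the LM's next-token distribution with a retrieval distribution, then score each label's verbalizer token. All distributions below are toy stand-ins for real model and datastore outputs.

```python
import numpy as np

def interpolate(p_lm, p_knn, lam=0.5):
    return (1 - lam) * p_lm + lam * p_knn

# Toy next-token distributions over a 4-word vocab: [great, terrible, the, a]
p_lm = np.array([0.30, 0.25, 0.30, 0.15])
p_knn = np.array([0.60, 0.10, 0.20, 0.10])  # retrieval sharpens "great"

verbalizers = {"positive": 0, "negative": 1}  # label -> verbalizer token id
p = interpolate(p_lm, p_knn)
pred = max(verbalizers, key=lambda label: p[verbalizers[label]])
print(pred, p.round(3))
```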

Time Waits for No One! Analysis and Challenges of Temporal Misalignment

1 code implementation • NAACL 2022 • Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Karishma Mandyam, Noah A. Smith

When an NLP model is trained on text data from one time period and tested or deployed on data from another, the resulting temporal misalignment can degrade end-task performance.

DEMix Layers: Disentangling Domains for Modular Language Modeling

2 code implementations • NAACL 2022 • Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer

We introduce a new domain expert mixture (DEMix) layer that enables conditioning a language model (LM) on the domain of the input text.

Language Modelling
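A minimal sketch of a DEMix-style feedforward sublayer: one expert MLP per domain, hard-routed by the known domain id during training and softly mixed at inference. The shapes and the two-layer MLP are illustrative, not the paper's exact configuration.

```python
import numpy as np

class DEMixFFN:
    def __init__(self, n_domains, d_model, d_hidden, rng):
        self.w1 = rng.normal(size=(n_domains, d_model, d_hidden)) * 0.02
        self.w2 = rng.normal(size=(n_domains, d_hidden, d_model)) * 0.02

    def forward(self, x, domain_weights):
        # Mixture over per-domain experts; one-hot weights = hard routing.
        out = np.zeros_like(x)
        for d, w in enumerate(domain_weights):
            if w > 0:
                h = np.maximum(x @ self.w1[d], 0.0)  # ReLU
                out += w * (h @ self.w2[d])
        return out

layer = DEMixFFN(n_domains=4, d_model=8, d_hidden=16, rng=np.random.default_rng(0))
x = np.ones((2, 8))
print(layer.forward(x, [1, 0, 0, 0]).shape)      # train: hard route to domain 0
print(layer.forward(x, [0.7, 0.3, 0, 0]).shape)  # test: soft mixture
```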

All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text

no code implementations • ACL 2021 • Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, Noah A. Smith

Human evaluations are typically considered the gold standard in natural language generation, but as models' fluency improves, how well can evaluators detect and judge machine-generated text?

NLG Evaluation • Text Generation

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models

2 code implementations • Findings of the Association for Computational Linguistics 2020 • Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, Noah A. Smith

We investigate the extent to which pretrained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration.

Sentence • Text Generation
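A minimal sketch of the evaluation loop, with `generate` and `toxicity_score` as hypothetical stand-ins (the paper scores continuations with Perspective API); the two metrics mirror the paper's expected maximum toxicity and toxicity probability over k sampled continuations per prompt.

```python
import random

def generate(prompt):         # hypothetical: sample an LM continuation
    return prompt + " ..."

def toxicity_score(text):     # hypothetical: real work uses Perspective API
    return random.random()

def evaluate(prompts, k=25, threshold=0.5):
    max_tox, any_toxic = [], []
    for p in prompts:
        scores = [toxicity_score(generate(p)) for _ in range(k)]
        max_tox.append(max(scores))
        any_toxic.append(max(scores) >= threshold)
    return sum(max_tox) / len(max_tox), sum(any_toxic) / len(any_toxic)

random.seed(0)
exp_max, prob_toxic = evaluate(["The protesters gathered and",
                                "In a shocking turn of events,"])
print(round(exp_max, 3), prob_toxic)
```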

Show Your Work: Improved Reporting of Experimental Results

4 code implementations • IJCNLP 2019 • Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, Noah A. Smith

Research in natural language processing proceeds, in part, by demonstrating that new models achieve superior performance (e.g., accuracy) on held-out test data, compared to previous results.

Annotation Artifacts in Natural Language Inference Data

no code implementations • NAACL 2018 • Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, Noah A. Smith

Large-scale datasets for natural language inference are created by presenting crowd workers with a sentence (premise), and asking them to generate three new sentences (hypotheses) that it entails, contradicts, or is logically neutral with respect to.

Natural Language Inference • Negation • +2
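The artifacts were exposed with a hypothesis-only probe: a classifier that never sees the premise but still beats the majority class. Below is a minimal sketch using scikit-learn in place of the paper's fastText classifier, with toy examples standing in for SNLI/MultiNLI data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (hypothesis, label) pairs; note artifact-like cues, e.g., negation words
# correlating with contradiction, as the paper reports.
hypotheses = ["a person is outdoors", "nobody is outside",
              "someone is moving", "the man is not sleeping"]
labels = ["entailment", "contradiction", "entailment", "contradiction"]

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(hypotheses, labels)
print(clf.predict(["nobody is moving"]))  # a cue word alone drives the label
```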
