Search Results for author: Yanai Elazar

Found 38 papers, 20 papers with code

It’s not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT

1 code implementation EMNLP (BlackboxNLP) 2020 Hila Gonen, Shauli Ravfogel, Yanai Elazar, Yoav Goldberg

Recent works have demonstrated that multilingual BERT (mBERT) learns rich cross-lingual representations that allow for transfer across languages.


Calibrating Large Language Models with Sample Consistency

no code implementations 21 Feb 2024 Qing Lyu, Kumar Shridhar, Chaitanya Malaviya, Li Zhang, Yanai Elazar, Niket Tandon, Marianna Apidianaki, Mrinmaya Sachan, Chris Callison-Burch

Accurately gauging the confidence level of Large Language Model (LLM) predictions is pivotal for their reliable application.
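One family of calibration signals the title points at is sample consistency: query the model several times with sampling enabled and use agreement among the sampled answers as a confidence estimate. The sketch below is a minimal agreement-based variant, not the paper's exact estimator; the function name and interface are illustrative.

```python
from collections import Counter

def consistency_confidence(samples):
    """Derive a confidence score from agreement among sampled answers.

    `samples` is a list of answers obtained by sampling the model several
    times on the same question (e.g. with temperature > 0).  Returns the
    majority answer together with the fraction of samples that agree with
    it, which serves as a consistency-based confidence estimate.
    """
    counts = Counter(samples)
    answer, freq = counts.most_common(1)[0]
    return answer, freq / len(samples)
```

For example, `consistency_confidence(["42", "42", "41", "42"])` yields `("42", 0.75)`: the model's majority answer and the share of samples backing it.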

Paloma: A Benchmark for Evaluating Language Model Fit

no code implementations 16 Dec 2023 Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Luca Soldaini, Ananya Harsh Jha, Oyvind Tafjord, Dustin Schwenk, Evan Pete Walsh, Yanai Elazar, Kyle Lo, Dirk Groeneveld, Iz Beltagy, Hannaneh Hajishirzi, Noah A. Smith, Kyle Richardson, Jesse Dodge

We invite submissions to our benchmark and organize results by comparability based on compliance with guidelines such as removal of benchmark contamination from pretraining.

Language Modelling

Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

no code implementations 16 Nov 2023 Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Raghavi, Vivek Srikumar, Sameer Singh, Noah A. Smith

Previous work has found that datasets with paired inputs are prone to correlations between a specific part of the input (e.g., the hypothesis in NLI) and the label; consequently, models trained on only that part of the input outperform chance.

counterfactual In-Context Learning +2

What's In My Big Data?

1 code implementation 31 Oct 2023 Yanai Elazar, Akshita Bhagia, Ian Magnusson, Abhilasha Ravichander, Dustin Schwenk, Alane Suhr, Pete Walsh, Dirk Groeneveld, Luca Soldaini, Sameer Singh, Hanna Hajishirzi, Noah A. Smith, Jesse Dodge

We open-source WIMBD's code and artifacts to provide a standard set of evaluations for new text-based corpora and to encourage more analyses and transparency around them: github.com/allenai/wimbd.


The Bias Amplification Paradox in Text-to-Image Generation

1 code implementation 1 Aug 2023 Preethi Seshadri, Sameer Singh, Yanai Elazar

Bias amplification is a phenomenon in which models exacerbate biases or stereotypes present in the training data.

Text-to-Image Generation

Estimating the Causal Effect of Early ArXiving on Paper Acceptance

2 code implementations 24 Jun 2023 Yanai Elazar, Jiayao Zhang, David Wadden, Bo Zhang, Noah A. Smith

However, since quality is a challenging construct to estimate, we use the negative outcome control method, with paper citation count as a control variable to debias the quality confounding effect.

Causal Inference

Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation

1 code implementation 26 May 2023 Marius Mosbach, Tiago Pimentel, Shauli Ravfogel, Dietrich Klakow, Yanai Elazar

In this paper, we compare the generalization of few-shot fine-tuning and in-context learning to challenge datasets, while controlling for the models used, the number of examples, and the number of parameters, ranging from 125M to 30B.

Domain Generalization In-Context Learning

At Your Fingertips: Extracting Piano Fingering Instructions from Videos

no code implementations 7 Mar 2023 Amit Moryossef, Yanai Elazar, Yoav Goldberg

Piano fingering, knowing which finger to use to play each note in a musical piece, is a hard and important skill to master when learning to play the piano.

Lexical Generalization Improves with Larger Models and Longer Training

1 code implementation 23 Oct 2022 Elron Bandel, Yoav Goldberg, Yanai Elazar

While fine-tuned language models perform well on many tasks, they were also shown to rely on superficial surface features such as lexical overlap.

Natural Language Inference Reading Comprehension

CIKQA: Learning Commonsense Inference with a Unified Knowledge-in-the-loop QA Paradigm

no code implementations 12 Oct 2022 Hongming Zhang, Yintong Huo, Yanai Elazar, Yangqiu Song, Yoav Goldberg, Dan Roth

We first align commonsense tasks with relevant knowledge from commonsense knowledge bases and ask humans to annotate whether the knowledge is enough or not.

Question Answering Task 2

Measuring Causal Effects of Data Statistics on Language Model's 'Factual' Predictions

no code implementations 28 Jul 2022 Yanai Elazar, Nora Kassner, Shauli Ravfogel, Amir Feder, Abhilasha Ravichander, Marius Mosbach, Yonatan Belinkov, Hinrich Schütze, Yoav Goldberg

Our causal framework and our results demonstrate the importance of studying datasets and the benefits of causality for understanding NLP models.

Text-based NP Enrichment

1 code implementation 24 Sep 2021 Yanai Elazar, Victoria Basmov, Yoav Goldberg, Reut Tsarfaty

Understanding the relations between entities denoted by NPs in a text is a critical part of human-like natural language understanding.

Natural Language Understanding

Back to Square One: Artifact Detection, Training and Commonsense Disentanglement in the Winograd Schema

no code implementations EMNLP 2021 Yanai Elazar, Hongming Zhang, Yoav Goldberg, Dan Roth

To support this claim, we first show that the current evaluation method of WS is sub-optimal and propose a modification that uses twin sentences for evaluation.

Bias Detection Disentanglement +1

Contrastive Explanations for Model Interpretability

1 code implementation EMNLP 2021 Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg

Our method is based on projecting model representation to a latent space that captures only the features that are useful (to the model) to differentiate two potential decisions.

Text Classification

Measuring and Improving Consistency in Pretrained Language Models

1 code implementation 1 Feb 2021 Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg

In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge?
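Consistency here means that a model should give the same answer to different paraphrases of the same factual query. A common way to quantify this, sketched below under the assumption of a pairwise-agreement metric (the paper's exact scoring may differ), is the fraction of paraphrase pairs that received identical answers.

```python
from itertools import combinations

def consistency(predictions):
    """Fraction of paraphrase pairs that received the same answer.

    `predictions` maps each paraphrase of a factual query (e.g. cloze
    patterns for the same subject-relation pair) to the model's answer.
    Returns 1.0 when there is only one paraphrase, since there is
    nothing to disagree with.
    """
    answers = list(predictions.values())
    pairs = list(combinations(answers, 2))
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)
```

For instance, three paraphrases answered "Paris", "Paris", "Lyon" agree on one of three pairs, giving a consistency of 1/3.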

First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT

1 code implementation EACL 2021 Benjamin Muller, Yanai Elazar, Benoît Sagot, Djamé Seddah

Such transfer emerges by fine-tuning on a task of interest in one language and evaluating on a distinct language not seen during fine-tuning.

Language Modelling Zero-Shot Cross-Lingual Transfer


Do Language Embeddings Capture Scales?

no code implementations EMNLP (BlackboxNLP) 2020 Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, Dan Roth

Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge.

Common Sense Reasoning

Evaluating NLP Models via Contrast Sets

no code implementations 1 Oct 2020 Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, A. Zhang, Ben Zhou

Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.

Reading Comprehension Sentiment Analysis +1

Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals

no code implementations 1 Jun 2020 Yanai Elazar, Shauli Ravfogel, Alon Jacovi, Yoav Goldberg

In this work, we point out the inability to infer behavioral conclusions from probing results and offer an alternative method that focuses on how the information is being used, rather than on what information is encoded.

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection

2 code implementations ACL 2020 Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg

The ability to control for the kinds of information encoded in neural representation has a variety of use cases, especially in light of the challenge of interpreting these models.

Fairness Multi-class Classification +1
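The core idea of iterative nullspace projection is to repeatedly train a linear probe for the protected attribute and project the representations onto the probe's nullspace, so that the attribute is no longer linearly recoverable. The sketch below is a simplified single-direction variant that uses a least-squares direction in place of the paper's trained classifiers; it assumes NumPy and is illustrative, not the reference implementation.

```python
import numpy as np

def nullspace_projection(W):
    """Projection matrix onto the nullspace of the rows of W."""
    # Orthonormal basis for the row space of W via SVD.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    B = Vt[S > 1e-10]  # directions that carry the protected signal
    return np.eye(W.shape[1]) - B.T @ B

def inlp(X, y, n_iters=5):
    """Iteratively remove linearly decodable information about y from X."""
    d = X.shape[1]
    P = np.eye(d)
    Xp = X.copy()
    for _ in range(n_iters):
        # Fit a linear "probe" for the protected attribute; a least-squares
        # direction stands in here for a trained linear classifier.
        w, *_ = np.linalg.lstsq(Xp, y.astype(float) - y.mean(), rcond=None)
        P = nullspace_projection(w.reshape(1, -1)) @ P
        Xp = X @ P.T
    return Xp, P
```

Each iteration removes one direction, so after `n_iters` rounds the representations live in a subspace of dimension `d - n_iters` (at most), and a fresh linear probe for the attribute should perform near chance.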

oLMpics -- On what Language Model Pre-training Captures

1 code implementation 31 Dec 2019 Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant

A fundamental challenge is to understand whether the performance of an LM on a task should be attributed to the pre-trained representations or to the process of fine-tuning on the task data.

Language Modelling

Adversarial Removal of Demographic Attributes Revisited

no code implementations IJCNLP 2019 Maria Barrett, Yova Kementchedjhieva, Yanai Elazar, Desmond Elliott, Anders Søgaard

Elazar and Goldberg (2018) showed that protected attributes can be extracted from the representations of a debiased neural network for mention detection at above-chance levels, by evaluating a diagnostic classifier on a held-out subsample of the data it was trained on.

Where's My Head? Definition, Dataset and Models for Numeric Fused-Heads Identification and Resolution

1 code implementation 26 May 2019 Yanai Elazar, Yoav Goldberg

We provide the first computational treatment of fused-heads constructions (FH), focusing on the numeric fused-heads (NFH).

Missing Elements Sentence

Where's My Head? Definition, Data Set, and Models for Numeric Fused-Head Identification and Resolution

no code implementations TACL 2019 Yanai Elazar, Yoav Goldberg

We provide the first computational treatment of fused-heads constructions (FHs), focusing on the numeric fused-heads (NFHs).


Adversarial Removal of Demographic Attributes from Text Data

1 code implementation EMNLP 2018 Yanai Elazar, Yoav Goldberg

Recent advances in Representation Learning and Adversarial Training seem to succeed in removing unwanted features from the learned representation.

Representation Learning
