no code implementations • 28 Jul 2022 • Yanai Elazar, Nora Kassner, Shauli Ravfogel, Amir Feder, Abhilasha Ravichander, Marius Mosbach, Yonatan Belinkov, Hinrich Schütze, Yoav Goldberg
Our causal framework and our results demonstrate the importance of studying datasets and the benefits of causality for understanding NLP models.
no code implementations • RepL4NLP (ACL) 2022 • Hila Gonen, Shauli Ravfogel, Yoav Goldberg
Multilingual language models were shown to allow for nontrivial transfer across scripts and languages.
1 code implementation • 28 Jan 2022 • Shauli Ravfogel, Michael Twiton, Yoav Goldberg, Ryan Cotterell
Modern neural models trained on textual data rely on pre-trained representations that emerge without direct supervision.
no code implementations • 28 Jan 2022 • Shauli Ravfogel, Francisco Vargas, Yoav Goldberg, Ryan Cotterell
The representation space of neural models for textual data emerges in an unsupervised manner during training.
3 code implementations • ACL 2022 • Elad Ben Zaken, Shauli Ravfogel, Yoav Goldberg
We introduce BitFit, a sparse fine-tuning method in which only the bias terms of the model (or a subset of them) are modified.
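A minimal sketch of what bias-only fine-tuning looks like in practice, assuming a Hugging Face transformers classifier (illustrative only, not the authors' released code):

```python
# Bias-only fine-tuning sketch: freeze everything except bias terms
# (and, here, the randomly initialized classification head).
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

for name, param in model.named_parameters():
    param.requires_grad = "bias" in name or name.startswith("classifier")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable} / {total} ({100 * trainable / total:.2f}%)")

# The optimizer only sees the unfrozen parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```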
no code implementations • ACL 2021 • Shauli Ravfogel, Hillel Taub-Tabib, Yoav Goldberg
We advocate for a search paradigm called "extractive search", in which a search query is enriched with capture-slots, to allow for such rapid extraction.
no code implementations • CoNLL (EMNLP) 2021 • Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg
We apply this method to study how BERT models of different sizes process relative clauses (RCs).
1 code implementation • EMNLP 2021 • Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg
Our method is based on projecting the model representation onto a latent space that captures only the features that are useful (to the model) to differentiate two potential decisions.
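One way to picture such a projection (an illustrative sketch under simplifying assumptions, not the paper's exact construction) is to keep only the component of a representation that lies along the direction separating two candidate labels of a linear classifier:

```python
# Keep only the part of a representation that distinguishes labels a and b,
# by projecting onto the difference of their linear-classifier weight vectors.
import numpy as np

def contrastive_projection(h, w_a, w_b):
    """Project representation h onto the direction separating labels a and b."""
    d = w_a - w_b                      # direction that changes the a-vs-b score
    d = d / np.linalg.norm(d)
    return np.outer(d, d) @ h          # component of h along that direction

rng = np.random.default_rng(0)
h = rng.normal(size=768)               # a hypothetical hidden representation
W = rng.normal(size=(3, 768))          # a hypothetical 3-class linear head
h_contrastive = contrastive_projection(h, W[0], W[1])

# The a-vs-b score difference is preserved; all other directions are discarded.
scores, scores_c = W @ h, W @ h_contrastive
print(scores[0] - scores[1], scores_c[0] - scores_c[1])  # equal up to float error
```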
1 code implementation • 1 Feb 2021 • Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg
In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge?
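A hedged illustration of this kind of consistency probe, using Hugging Face transformers and made-up cloze paraphrases rather than the paper's data:

```python
# Query a masked LM with two paraphrases of the same factual statement
# and check whether the top prediction agrees across them.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

paraphrases = [
    "Homeland premiered on [MASK].",
    "Homeland was originally aired on [MASK].",
]

predictions = [fill(t)[0]["token_str"] for t in paraphrases]
consistent = len(set(predictions)) == 1
print(predictions, "consistent:", consistent)
```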
1 code implementation • 16 Oct 2020 • Hila Gonen, Shauli Ravfogel, Yanai Elazar, Yoav Goldberg
Recent works have demonstrated that multilingual BERT (mBERT) learns rich cross-lingual representations that allow for transfer across languages.
no code implementations • EMNLP (insights) 2020 • Yanai Elazar, Victoria Basmov, Shauli Ravfogel, Yoav Goldberg, Reut Tsarfaty
In this work, we follow known methodologies of collecting labeled data for the complement coercion phenomenon.
1 code implementation • EMNLP (BlackboxNLP) 2020 • Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg
Contextualized word representations, such as ELMo and BERT, were shown to perform well on various semantic and syntactic tasks.
no code implementations • 1 Jun 2020 • Yanai Elazar, Shauli Ravfogel, Alon Jacovi, Yoav Goldberg
In this work, we point out the inability to infer behavioral conclusions from probing results and offer an alternative method that focuses on how the information is being used, rather than on what information is encoded.
1 code implementation • ACL 2020 • Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg
The ability to control for the kinds of information encoded in neural representations has a variety of use cases, especially in light of the challenge of interpreting these models.
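A minimal sketch of iterative linear-information removal in the spirit of this line of work (illustrative; the released implementation differs in its details):

```python
# Repeatedly train a linear probe for a protected attribute and project the
# representations onto the probe's nullspace, until the attribute is no longer
# linearly recoverable.
import numpy as np
from sklearn.linear_model import LogisticRegression

def nullspace_projection(W):
    """Projection matrix onto the nullspace of the rows of W."""
    _, s, vt = np.linalg.svd(W, full_matrices=False)
    basis = vt[s > 1e-10]                      # orthonormal basis of the row space
    return np.eye(W.shape[1]) - basis.T @ basis

def remove_linear_information(X, z, n_iters=5):
    """Iteratively remove directions of X that linearly predict the attribute z."""
    P = np.eye(X.shape[1])
    for _ in range(n_iters):
        clf = LogisticRegression(max_iter=1000).fit(X @ P, z)
        P = P @ nullspace_projection(clf.coef_)
    return P

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))                            # hypothetical representations
z = (X[:, 0] + 0.1 * rng.normal(size=2000) > 0).astype(int)  # attribute leaking into dim 0
P = remove_linear_information(X, z)
X_clean = X @ P
# Probe accuracy on the cleaned representations should drop toward chance.
print(LogisticRegression(max_iter=1000).fit(X_clean, z).score(X_clean, z))
```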
1 code implementation • NAACL 2021 • Carlo Meloni, Shauli Ravfogel, Yoav Goldberg
Historical linguists have identified regularities in the process of historical sound change.
2 code implementations • NAACL 2019 • Shauli Ravfogel, Yoav Goldberg, Tal Linzen
How do typological properties such as word order and morphological case marking affect the ability of neural sequence models to acquire the syntax of a language?
no code implementations • WS 2018 • Shauli Ravfogel, Francis M. Tyers, Yoav Goldberg
We propose the Basque agreement prediction task as a challenging benchmark for models that attempt to learn regularities in human language.