no code implementations • ACL (MWE) 2021 • Vered Shwartz
In recent years, language models (LMs) have become almost synonymous with NLP.
1 code implementation • Findings (ACL) 2022 • Vered Shwartz
We propose three language-agnostic methods, one of which achieves promising results on gold-standard annotations that we collected for a small number of languages.
1 code implementation • EMNLP 2021 • Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer
Large language models have shown promising results in zero-shot settings.
no code implementations • EMNLP 2020 • Vered Shwartz, Rachel Rudinger, Oyvind Tafjord
Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models.
no code implementations • 24 May 2023 • Sahithya Ravi, Raymond Ng, Vered Shwartz
We propose COMET-M (Multi-Event), an event-centric commonsense model capable of generating commonsense inferences for a target event within a complex sentence.
no code implementations • 24 May 2023 • Natalie Shapira, Mosh Levy, Seyed Hossein Alavi, Xuhui Zhou, Yejin Choi, Yoav Goldberg, Maarten Sap, Vered Shwartz
The escalating debate on AI's capabilities warrants developing reliable metrics to assess machine "intelligence".
1 code implementation • 23 May 2023 • EunJeong Hwang, Vered Shwartz
Memes are a widely popular tool for web users to express their thoughts using visual metaphors.
no code implementations • 17 May 2023 • Jordan Coil, Vered Shwartz
Noun compound interpretation is the task of expressing a noun compound (e.g., chocolate bunny) in a free-text paraphrase that makes the relationship between the constituent nouns explicit (e.g., bunny-shaped chocolate).
1 code implementation • 20 Feb 2023 • Sahithya Ravi, Chris Tanner, Raymond Ng, Vered Shwartz
Event coreference models cluster event mentions pertaining to the same real-world event.
1 code implementation • 24 Oct 2022 • Sahithya Ravi, Aditya Chinchure, Leonid Sigal, Renjie Liao, Vered Shwartz
In contrast to previous methods which inject knowledge from static knowledge bases, we investigate the incorporation of contextualized knowledge using Commonsense Transformer (COMET), an existing knowledge model trained on human-curated knowledge bases.
Ranked #5 on Visual Question Answering (VQA) on A-OKVQA (DA VQA Score metric)
1 code implementation • Findings (EMNLP) 2021 • Tenghao Huang, Faeze Brahman, Vered Shwartz, Snigdha Chaturvedi
Pre-trained language models learn socially harmful biases from their training corpora, and may repeat these biases when used for generation.
1 code implementation • 31 Aug 2021 • Tuhin Chakrabarty, Yejin Choi, Vered Shwartz
Figurative language is ubiquitous in English.
1 code implementation • Joint Conference on Lexical and Computational Semantics 2021 • Ohad Rozen, Shmuel Amar, Vered Shwartz, Ido Dagan
Our approach facilitates learning generic inference patterns requiring relational knowledge (e.g., inferences related to hypernymy) during training, while injecting on-demand the relevant relational facts (e.g., pangolin is an animal) at test time.
no code implementations • 14 Dec 2020 • Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi
In this paper, we investigate the extent to which neural models can reason about natural language rationales that explain model predictions, relying only on distant supervision with no additional annotation cost for human-written rationales.
1 code implementation • COLING 2020 • Vered Shwartz, Yejin Choi
Mining commonsense knowledge from corpora suffers from reporting bias, over-representing the rare at the expense of the trivial (Gordon and Van Durme, 2013).
1 code implementation • EMNLP 2020 • Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, Yejin Choi
We present Social Chemistry, a new conceptual formalism to study people's everyday social norms and moral judgments over a rich spectrum of real life situations described in natural language.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Rachel Rudinger, Vered Shwartz, Jena D. Hwang, Chandra Bhagavatula, Maxwell Forbes, Ronan Le Bras, Noah A. Smith, Yejin Choi
Defeasible inference is a mode of reasoning in which an inference (X is a bird, therefore X flies) may be weakened or overturned in light of new evidence (X is a penguin).
1 code implementation • EMNLP 2020 • Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena Hwang, Ronan Le Bras, Antoine Bosselut, Yejin Choi
Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the relative past and future.
1 code implementation • 4 Oct 2020 • Saadia Gabriel, Chandra Bhagavatula, Vered Shwartz, Ronan Le Bras, Maxwell Forbes, Yejin Choi
Human understanding of narrative texts requires making commonsense inferences beyond what is stated explicitly in the text.
no code implementations • ACL 2020 • Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, Dan Roth
We organize this tutorial to provide researchers with the critical foundations and recent advances in commonsense representation and reasoning, in the hopes of casting a brighter light on this promising area of future research.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Yehudit Meged, Avi Caciularu, Vered Shwartz, Ido Dagan
We study the potential synergy between two different NLP tasks, both confronting predicate lexical variability: identifying predicate paraphrases, and event coreference resolution.
1 code implementation • EMNLP 2020 • Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi
Natural language understanding involves reading between the lines with implicit background knowledge.
1 code implementation • CoNLL 2019 • Ohad Rozen, Vered Shwartz, Roee Aharoni, Ido Dagan
Phenomenon-specific "adversarial" datasets have been recently designed to perform targeted stress-tests for particular inference types.
1 code implementation • WS 2019 • Vered Shwartz
Building meaningful representations of noun compounds is not trivial since many of them scarcely appear in the corpus.
1 code implementation • ACL 2019 • Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, Ido Dagan
Our analysis confirms that all our representation elements, including the mention span itself, its context, and the relation to other mentions contribute to the model's success.
Coreference Resolution
Cross Document Coreference Resolution
1 code implementation • TACL 2019 • Vered Shwartz, Ido Dagan
Building meaningful phrase representations is challenging because phrase meanings are not simply the sum of their constituent meanings.
1 code implementation • NAACL 2019 • Guy Tevet, Gavriel Habib, Vered Shwartz, Jonathan Berant
Generative Adversarial Networks (GANs) are a promising approach for text generation that, unlike traditional language models (LMs), does not suffer from the problem of "exposure bias".
no code implementations • SEMEVAL 2018 • Jose Camacho-Collados, Claudio Delli Bovi, Luis Espinosa-Anke, Sergio Oramas, Tommaso Pasini, Enrico Santus, Vered Shwartz, Roberto Navigli, Horacio Saggion
This paper describes the SemEval 2018 Shared Task on Hypernym Discovery.
no code implementations • NAACL 2018 • Vered Shwartz, Chris Waterson
Automatic interpretation of the relation between the constituents of a noun compound, e.g., olive oil (source) and baby oil (purpose), is an important task for many NLP applications.
1 code implementation • ACL 2018 • Vered Shwartz, Ido Dagan
Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications.
2 code implementations • ACL 2018 • Max Glockner, Vered Shwartz, Yoav Goldberg
We create a new NLI test set that shows the deficiency of state-of-the-art models in inferences that require lexical and world knowledge.
no code implementations • SEMEVAL 2018 • Tu Vu, Vered Shwartz
Supervised distributional methods are applied successfully in lexical entailment, but recent work questioned whether these methods actually learn a relation between two words.
no code implementations • SEMEVAL 2017 • Vered Shwartz, Gabriel Stanovsky, Ido Dagan
We present a simple method for ever-growing extraction of predicate paraphrases from news headlines in Twitter.
no code implementations • SEMEVAL 2017 • Sneha Rajana, Chris Callison-Burch, Marianna Apidianaki, Vered Shwartz
Recognizing and distinguishing antonyms from other types of semantic relations is an essential part of language understanding systems.
1 code implementation • WS 2017 • Rachel Wities, Vered Shwartz, Gabriel Stanovsky, Meni Adler, Ori Shapira, Shyam Upadhyay, Dan Roth, Eugenio Martinez Camara, Iryna Gurevych, Ido Dagan
We propose to move from Open Information Extraction (OIE) ahead to Open Knowledge Representation (OKR), aiming to represent information conveyed jointly in a set of texts in an open text-based manner.
1 code implementation • EACL 2017 • Vered Shwartz, Enrico Santus, Dominik Schlechtweg
The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution.
Ranked #7 on Hypernym Discovery on Music domain
1 code implementation • WS 2016 • Vered Shwartz, Ido Dagan
The reported results in the shared task bring this submission to the third place on subtask 1 (word relatedness), and the first place on subtask 2 (semantic relation classification), demonstrating the utility of integrating the complementary path-based and distributional information sources in recognizing concrete semantic relations.
1 code implementation • WS 2016 • Vered Shwartz, Ido Dagan
Recognizing various semantic relations between terms is beneficial for many NLP tasks.
1 code implementation • ACL 2016 • Vered Shwartz, Yoav Goldberg, Ido Dagan
Detecting hypernymy relations is a key task in NLP, which is addressed in the literature using two complementary approaches.