Search Results for author: Vered Shwartz

Found 46 papers, 32 papers with code

A Long Hard Look at MWEs in the Age of Language Models

no code implementations ACL (MWE) 2021 Vered Shwartz

In recent years, language models (LMs) have become almost synonymous with NLP.

Good Night at 4 pm?! Time Expressions in Different Cultures

1 code implementation Findings (ACL) 2022 Vered Shwartz

We propose 3 language-agnostic methods, one of which achieves promising results on gold standard annotations that we collected for a small number of languages.

Cultural Vocal Bursts Intensity Prediction

"You are grounded!": Latent Name Artifacts in Pre-trained Language Models

no code implementations EMNLP 2020 Vered Shwartz, Rachel Rudinger, Oyvind Tafjord

Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models.

Reading Comprehension

COMET-M: Reasoning about Multiple Events in Complex Sentences

no code implementations 24 May 2023 Sahithya Ravi, Raymond Ng, Vered Shwartz

We propose COMET-M (Multi-Event), an event-centric commonsense model capable of generating commonsense inferences for a target event within a complex sentence.

Coreference Resolution

MemeCap: A Dataset for Captioning and Interpreting Memes

1 code implementation 23 May 2023 EunJeong Hwang, Vered Shwartz

Memes are a widely popular tool for web users to express their thoughts using visual metaphors.

Image Captioning · Meme Captioning +2

From chocolate bunny to chocolate crocodile: Do Language Models Understand Noun Compounds?

no code implementations17 May 2023 Jordan Coil, Vered Shwartz

Noun compound interpretation is the task of expressing a noun compound (e.g. chocolate bunny) in a free-text paraphrase that makes the relationship between the constituent nouns explicit (e.g. bunny-shaped chocolate).

VLC-BERT: Visual Question Answering with Contextualized Commonsense Knowledge

1 code implementation 24 Oct 2022 Sahithya Ravi, Aditya Chinchure, Leonid Sigal, Renjie Liao, Vered Shwartz

In contrast to previous methods which inject knowledge from static knowledge bases, we investigate the incorporation of contextualized knowledge using Commonsense Transformer (COMET), an existing knowledge model trained on human-curated knowledge bases.

Ranked #5 on Visual Question Answering (VQA) on A-OKVQA (DA VQA Score metric)

Question Answering · Visual Question Answering
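
As a rough sketch of the recipe above (the function name, prompt format, and relation list are our illustration, not the paper's API), COMET-style inferences for the question can be generated and appended as extra textual context for the VQA model:

```python
# Hedged sketch: expand a VQA question with COMET-style commonsense
# inferences before passing it to the answering model. `comet_generate`
# is a stand-in for a real COMET decoding call.
def expand_question(question: str, comet_generate) -> str:
    relations = ["xIntent", "xNeed", "xEffect"]  # common COMET relations
    inferences = [comet_generate(question, rel) for rel in relations]
    return question + " [knowledge] " + " ".join(inferences)

# Stubbed generator for illustration:
print(expand_question("Why is the man holding an umbrella?",
                      lambda q, rel: f"({rel}: to stay dry)"))
```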

Uncovering Implicit Gender Bias in Narratives through Commonsense Inference

1 code implementation Findings (EMNLP) 2021 Tenghao Huang, Faeze Brahman, Vered Shwartz, Snigdha Chaturvedi

Pre-trained language models learn socially harmful biases from their training corpora, and may repeat these biases when used for generation.

Teach the Rules, Provide the Facts: Targeted Relational-knowledge Enhancement for Textual Inference

1 code implementation Joint Conference on Lexical and Computational Semantics 2021 Ohad Rozen, Shmuel Amar, Vered Shwartz, Ido Dagan

Our approach facilitates learning generic inference patterns that require relational knowledge (e.g. inferences related to hypernymy) during training, while injecting the relevant relational facts (e.g. pangolin is an animal) on demand at test time.
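
A minimal sketch of the test-time injection step (our wording and function, not the authors' code), reusing the paper's pangolin example:

```python
# Toy sketch of on-demand fact injection for textual inference: prepend the
# relevant relational fact to the premise so the model can apply the generic
# inference pattern it learned during training.
def inject_fact(premise: str, hyponym: str, hypernym: str) -> str:
    fact = f"{hyponym} is a kind of {hypernym}."
    return fact + " " + premise

print(inject_fact("A pangolin crossed the road.", "pangolin", "animal"))
# -> "pangolin is a kind of animal. A pangolin crossed the road."
```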

Surface Form Competition: Why the Highest Probability Answer Isn't Always Right

1 code implementation 16 Apr 2021 Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer

Large language models have shown promising results in zero-shot settings (Brown et al., 2020; Radford et al., 2019).

Multiple-choice

Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision

no code implementations 14 Dec 2020 Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi

In this paper, we investigate the extent to which neural models can reason about natural language rationales that explain model predictions, relying only on distant supervision with no additional annotation cost for human-written rationales.

Do Neural Language Models Overcome Reporting Bias?

1 code implementation COLING 2020 Vered Shwartz, Yejin Choi

Mining commonsense knowledge from corpora suffers from reporting bias, over-representing the rare at the expense of the trivial (Gordon and Van Durme, 2013).

Social Chemistry 101: Learning to Reason about Social and Moral Norms

1 code implementation EMNLP 2020 Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, Yejin Choi

We present Social Chemistry, a new conceptual formalism to study people's everyday social norms and moral judgments over a rich spectrum of real-life situations described in natural language.

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

1 code implementation EMNLP 2020 Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena Hwang, Ronan Le Bras, Antoine Bosselut, Yejin Choi

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the relative past and future.

Text Infilling
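
The backprop-based decoding idea can be caricatured in a few lines of PyTorch. This is a toy sketch under our own simplifications: real LM logits are replaced by random tensors, and agreement with the future context is reduced to a cross-entropy loss.

```python
import torch

# Toy gradient-based refinement of a soft token distribution: the loss is a
# stand-in for how well the draft text agrees with the future constraint.
vocab_size, seq_len = 100, 5
logits = torch.randn(seq_len, vocab_size, requires_grad=True)
future_constraint = torch.randint(0, vocab_size, (seq_len,))

optimizer = torch.optim.SGD([logits], lr=0.5)
for _ in range(20):  # alternating forward/backward refinement passes
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(logits, future_constraint)
    loss.backward()
    optimizer.step()

print(logits.argmax(dim=-1))  # decode the refined distribution
```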

Paragraph-level Commonsense Transformers with Recurrent Memory

1 code implementation 4 Oct 2020 Saadia Gabriel, Chandra Bhagavatula, Vered Shwartz, Ronan Le Bras, Maxwell Forbes, Yejin Choi

Human understanding of narrative texts requires making commonsense inferences beyond what is stated explicitly in the text.

Commonsense Reasoning for Natural Language Processing

no code implementations ACL 2020 Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, Dan Roth

We organize this tutorial to provide researchers with the critical foundations and recent advances in commonsense representation and reasoning, in the hopes of casting a brighter light on this promising area of future research.


"You are grounded!": Latent Name Artifacts in Pre-trained Language Models

1 code implementation 6 Apr 2020 Vered Shwartz, Rachel Rudinger, Oyvind Tafjord

Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models.

Reading Comprehension

Diversify Your Datasets: Analyzing Generalization via Controlled Variance in Adversarial Datasets

1 code implementation CoNLL 2019 Ohad Rozen, Vered Shwartz, Roee Aharoni, Ido Dagan

Phenomenon-specific "adversarial" datasets have recently been designed to perform targeted stress-tests for particular inference types.

A Systematic Comparison of English Noun Compound Representations

1 code implementation WS 2019 Vered Shwartz

Building meaningful representations of noun compounds is not trivial, since many of them rarely appear in corpora.

Word Embeddings

Still a Pain in the Neck: Evaluating Text Representations on Lexical Composition

1 code implementation TACL 2019 Vered Shwartz, Ido Dagan

Building meaningful phrase representations is challenging because phrase meanings are not simply the sum of their constituent meanings.

Word Embeddings
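
The premise that phrase meanings are not the sum of their parts is easy to probe in any embedding space. A minimal sketch with random stand-in vectors (not the paper's evaluation code):

```python
import numpy as np

# Compare a learned phrase vector against the additive composition of its
# constituents; composition is interesting exactly where the two diverge.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["olive", "oil", "olive oil"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

additive = emb["olive"] + emb["oil"]
print(cosine(emb["olive oil"], additive))  # low -> non-compositional phrase
```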

Evaluating Text GANs as Language Models

1 code implementation NAACL 2019 Guy Tevet, Gavriel Habib, Vered Shwartz, Jonathan Berant

Generative Adversarial Networks (GANs) are a promising approach for text generation that, unlike traditional language models (LMs), does not suffer from the problem of "exposure bias".

Text Generation

Olive Oil is Made of Olives, Baby Oil is Made for Babies: Interpreting Noun Compounds Using Paraphrases in a Neural Model

no code implementations NAACL 2018 Vered Shwartz, Chris Waterson

Automatic interpretation of the relation between the constituents of a noun compound, e.g. olive oil (source) and baby oil (purpose), is an important task for many NLP applications.

Memorization · Relation Classification

Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations

1 code implementation ACL 2018 Vered Shwartz, Ido Dagan

Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications.

General Classification

Breaking NLI Systems with Sentences that Require Simple Lexical Inferences

2 code implementations ACL 2018 Max Glockner, Vered Shwartz, Yoav Goldberg

We create a new NLI test set that shows the deficiency of state-of-the-art models in inferences that require lexical and world knowledge.
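
The construction behind such a test set can be sketched as single-word substitution on an existing premise; a toy version, assuming the substitution lexicon and labels are given:

```python
# Toy sketch: derive premise/hypothesis pairs whose label follows from one
# lexical inference (hypernym -> entailment, co-hyponym -> contradiction).
premise = "The man is holding a saxophone."
examples = [
    (premise, premise.replace("saxophone", "instrument"), "entailment"),
    (premise, premise.replace("saxophone", "guitar"), "contradiction"),
]
for p, h, label in examples:
    print(f"{label}: {h}")
```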

Integrating Multiplicative Features into Supervised Distributional Methods for Lexical Entailment

no code implementations SEMEVAL 2018 Tu Vu, Vered Shwartz

Supervised distributional methods are applied successfully in lexical entailment, but recent work questioned whether these methods actually learn a relation between two words.

Lexical Entailment
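
The titular multiplicative features amount to one extra interaction term in the pair representation; a minimal sketch (the function name is ours):

```python
import numpy as np

# Represent a candidate (hyponym, hypernym) pair for a supervised classifier:
# both word vectors plus their element-wise product as a multiplicative cue.
def pair_features(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    return np.concatenate([x, y, x * y])

x, y = np.random.rand(50), np.random.rand(50)
print(pair_features(x, y).shape)  # (150,) -> feed to any classifier
```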

Olive Oil is Made of Olives, Baby Oil is Made for Babies: Interpreting Noun Compounds using Paraphrases in a Neural Model

1 code implementation HLT 2018 Vered Shwartz, Chris Waterson

Automatic interpretation of the relation between the constituents of a noun compound, e.g. olive oil (source) and baby oil (purpose), is an important task for many NLP applications.

Memorization

A Consolidated Open Knowledge Representation for Multiple Texts

1 code implementation WS 2017 Rachel Wities, Vered Shwartz, Gabriel Stanovsky, Meni Adler, Ori Shapira, Shyam Upadhyay, Dan Roth, Eugenio Martinez Camara, Iryna Gurevych, Ido Dagan

We propose to move from Open Information Extraction (OIE) ahead to Open Knowledge Representation (OKR), aiming to represent information conveyed jointly in a set of texts in an open text-based manner.

Lexical Entailment · Open Information Extraction

Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection

1 code implementation EACL 2017 Vered Shwartz, Enrico Santus, Dominik Schlechtweg

The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution.

Hypernym Discovery
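
Many of the linguistically motivated measures surveyed in this line of work are unsupervised and inclusion-based. A hedged sketch of one classic variant (in the spirit of WeedsPrec), with made-up context weights:

```python
# If most of X's contexts are included in Y's contexts, X is a plausible
# hyponym of Y (distributional inclusion). Weights would normally come from
# a real distributional space, e.g. PPMI-weighted co-occurrence counts.
def inclusion_score(x_contexts: dict, y_contexts: dict) -> float:
    shared = sum(w for c, w in x_contexts.items() if c in y_contexts)
    total = sum(x_contexts.values())
    return shared / total if total else 0.0

print(inclusion_score({"eat": 2.0, "bark": 1.0}, {"eat": 3.0, "run": 1.0}))
```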

CogALex-V Shared Task: LexNET - Integrated Path-based and Distributional Method for the Identification of Semantic Relations

1 code implementation WS 2016 Vered Shwartz, Ido Dagan

The reported results bring this submission to third place on subtask 1 (word relatedness) and to first place on subtask 2 (semantic relation classification), demonstrating the utility of integrating the complementary path-based and distributional information sources in recognizing concrete semantic relations.

Classification · General Classification +1

Improving Hypernymy Detection with an Integrated Path-based and Distributional Method

1 code implementation ACL 2016 Vered Shwartz, Yoav Goldberg, Ido Dagan

Detecting hypernymy relations is a key task in NLP, which is addressed in the literature using two complementary approaches.
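
A rough sketch of how the two complementary signals can be integrated in one network, under our own simplifications: encode the dependency paths connecting the pair with an LSTM, then concatenate the averaged path vector with the two word embeddings.

```python
import torch
import torch.nn as nn

class IntegratedPairClassifier(nn.Module):
    """Toy path-based + distributional model for relation classification."""
    def __init__(self, vocab=1000, dim=50, n_relations=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.path_lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(3 * dim, n_relations)

    def forward(self, x_id, y_id, path_ids):
        # path_ids: (n_paths, path_len) ids along the dependency paths
        _, (h, _) = self.path_lstm(self.embed(path_ids))
        path_vec = h[-1].mean(dim=0)  # average over all observed paths
        pair = torch.cat([self.embed(x_id), path_vec, self.embed(y_id)])
        return self.out(pair)

model = IntegratedPairClassifier()
scores = model(torch.tensor(1), torch.tensor(2), torch.randint(0, 1000, (3, 4)))
print(scores)  # unnormalized scores over {hypernym, other}
```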
