Search Results for author: Vered Shwartz

Found 52 papers, 36 papers with code

"You are grounded!": Latent Name Artifacts in Pre-trained Language Models

no code implementations EMNLP 2020 Vered Shwartz, Rachel Rudinger, Oyvind Tafjord

Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models.

Reading Comprehension

Good Night at 4 pm?! Time Expressions in Different Cultures

1 code implementation Findings (ACL) 2022 Vered Shwartz

We propose 3 language-agnostic methods, one of which achieves promising results on gold standard annotations that we collected for a small number of languages.

Cultural Vocal Bursts Intensity Prediction

Stance Reasoner: Zero-Shot Stance Detection on Social Media with Explicit Reasoning

1 code implementation 22 Mar 2024 Maksym Taranukhin, Vered Shwartz, Evangelos Milios

We present Stance Reasoner, an approach to zero-shot stance detection on social media that leverages explicit reasoning over background knowledge to guide the model's inference about the document's stance on a target.

Few-Shot Stance Detection In-Context Learning +3

Empowering Air Travelers: A Chatbot for Canadian Air Passenger Rights

no code implementations 19 Mar 2024 Maksym Taranukhin, Sahithya Ravi, Gabor Lukacs, Evangelos Milios, Vered Shwartz

Recognizing this demand, we present a chatbot to assist passengers and educate them about their rights.

Chatbot Retrieval

Small But Funny: A Feedback-Driven Approach to Humor Distillation

no code implementations 28 Feb 2024 Sahithya Ravi, Patrick Huber, Akshat Shrivastava, Aditya Sagar, Ahmed Aly, Vered Shwartz, Arash Einolghozati

The emergence of Large Language Models (LLMs) has brought to light promising language generation capabilities, particularly in performing tasks like complex reasoning and creative writing.

Text Generation

CASE: Commonsense-Augmented Score with an Expanded Answer Space

1 code implementation 3 Nov 2023 Wenkai Chen, Sahithya Ravi, Vered Shwartz

One of the major limitations of the basic score is that it treats all words as equally important.

Multiple-choice

Automatic Evaluation of Generative Models with Instruction Tuning

1 code implementation 30 Oct 2023 Shuhaib Mehri, Vered Shwartz

Automatic evaluation of natural language generation has long been an elusive goal in NLP. A recent paradigm fine-tunes pre-trained language models to emulate human judgements for a particular task and evaluation criterion.

Text Generation

GD-COMET: A Geo-Diverse Commonsense Inference Model

no code implementations 23 Oct 2023 Mehar Bhatia, Vered Shwartz

With the increasing integration of AI into everyday life, it's becoming crucial to design AI systems that serve users from diverse backgrounds by making them culturally aware.

COMET-M: Reasoning about Multiple Events in Complex Sentences

1 code implementation 24 May 2023 Sahithya Ravi, Raymond Ng, Vered Shwartz

We propose COMET-M (Multi-Event), an event-centric commonsense model capable of generating commonsense inferences for a target event within a complex sentence.

coreference-resolution Sentence

MemeCap: A Dataset for Captioning and Interpreting Memes

2 code implementations 23 May 2023 EunJeong Hwang, Vered Shwartz

Memes are a widely popular tool for web users to express their thoughts using visual metaphors.

Image Captioning Meme Captioning +2

From chocolate bunny to chocolate crocodile: Do Language Models Understand Noun Compounds?

no code implementations 17 May 2023 Jordan Coil, Vered Shwartz

Noun compound interpretation is the task of expressing a noun compound (e.g. chocolate bunny) in a free-text paraphrase that makes the relationship between the constituent nouns explicit (e.g. bunny-shaped chocolate).

VLC-BERT: Visual Question Answering with Contextualized Commonsense Knowledge

1 code implementation 24 Oct 2022 Sahithya Ravi, Aditya Chinchure, Leonid Sigal, Renjie Liao, Vered Shwartz

In contrast to previous methods which inject knowledge from static knowledge bases, we investigate the incorporation of contextualized knowledge using Commonsense Transformer (COMET), an existing knowledge model trained on human-curated knowledge bases.

Ranked #8 on Visual Question Answering (VQA) on A-OKVQA (DA VQA Score metric)

Question Answering Visual Question Answering

Uncovering Implicit Gender Bias in Narratives through Commonsense Inference

1 code implementation Findings (EMNLP) 2021 Tenghao Huang, Faeze Brahman, Vered Shwartz, Snigdha Chaturvedi

Pre-trained language models learn socially harmful biases from their training corpora, and may repeat these biases when used for generation.

Teach the Rules, Provide the Facts: Targeted Relational-knowledge Enhancement for Textual Inference

1 code implementation Joint Conference on Lexical and Computational Semantics 2021 Ohad Rozen, Shmuel Amar, Vered Shwartz, Ido Dagan

Our approach facilitates learning generic inference patterns requiring relational knowledge (e.g. inferences related to hypernymy) during training, while injecting on-demand the relevant relational facts (e.g. pangolin is an animal) at test time.

Surface Form Competition: Why the Highest Probability Answer Isn't Always Right

2 code implementations 16 Apr 2021 Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer

Large language models have shown promising results in zero-shot settings (Brown et al., 2020; Radford et al., 2019).

Multiple-choice valid

Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision

no code implementations 14 Dec 2020 Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi

In this paper, we investigate the extent to which neural models can reason about natural language rationales that explain model predictions, relying only on distant supervision with no additional annotation cost for human-written rationales.

Do Neural Language Models Overcome Reporting Bias?

1 code implementation COLING 2020 Vered Shwartz, Yejin Choi

Mining commonsense knowledge from corpora suffers from reporting bias, over-representing the rare at the expense of the trivial (Gordon and Van Durme, 2013).

Social Chemistry 101: Learning to Reason about Social and Moral Norms

2 code implementations EMNLP 2020 Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, Yejin Choi

We present Social Chemistry, a new conceptual formalism to study people's everyday social norms and moral judgments over a rich spectrum of real life situations described in natural language.

Attribute

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

1 code implementation EMNLP 2020 Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena Hwang, Ronan Le Bras, Antoine Bosselut, Yejin Choi

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the relative past and future.

counterfactual Counterfactual Reasoning +1

Paragraph-level Commonsense Transformers with Recurrent Memory

1 code implementation 4 Oct 2020 Saadia Gabriel, Chandra Bhagavatula, Vered Shwartz, Ronan Le Bras, Maxwell Forbes, Yejin Choi

Human understanding of narrative texts requires making commonsense inferences beyond what is stated explicitly in the text.

Sentence World Knowledge

Commonsense Reasoning for Natural Language Processing

no code implementations ACL 2020 Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, Dan Roth

We organize this tutorial to provide researchers with the critical foundations and recent advances in commonsense representation and reasoning, in the hopes of casting a brighter light on this promising area of future research.

Navigate

"You are grounded!": Latent Name Artifacts in Pre-trained Language Models

1 code implementation 6 Apr 2020 Vered Shwartz, Rachel Rudinger, Oyvind Tafjord

Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models.

Reading Comprehension

Diversify Your Datasets: Analyzing Generalization via Controlled Variance in Adversarial Datasets

1 code implementation CONLL 2019 Ohad Rozen, Vered Shwartz, Roee Aharoni, Ido Dagan

Phenomenon-specific "adversarial" datasets have been recently designed to perform targeted stress-tests for particular inference types.

A Systematic Comparison of English Noun Compound Representations

1 code implementation WS 2019 Vered Shwartz

Building meaningful representations of noun compounds is not trivial since many of them scarcely appear in the corpus.

Word Embeddings

Still a Pain in the Neck: Evaluating Text Representations on Lexical Composition

1 code implementation TACL 2019 Vered Shwartz, Ido Dagan

Building meaningful phrase representations is challenging because phrase meanings are not simply the sum of their constituent meanings.

Word Embeddings

Evaluating Text GANs as Language Models

1 code implementation NAACL 2019 Guy Tevet, Gavriel Habib, Vered Shwartz, Jonathan Berant

Generative Adversarial Networks (GANs) are a promising approach for text generation that, unlike traditional language models (LM), does not suffer from the problem of "exposure bias".

Text Generation

Olive Oil is Made of Olives, Baby Oil is Made for Babies: Interpreting Noun Compounds Using Paraphrases in a Neural Model

no code implementations NAACL 2018 Vered Shwartz, Chris Waterson

Automatic interpretation of the relation between the constituents of a noun compound, e.g. olive oil (source) and baby oil (purpose), is an important task for many NLP applications.

Memorization Relation +1

Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations

1 code implementation ACL 2018 Vered Shwartz, Ido Dagan

Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications.

General Classification

Breaking NLI Systems with Sentences that Require Simple Lexical Inferences

2 code implementations ACL 2018 Max Glockner, Vered Shwartz, Yoav Goldberg

We create a new NLI test set that shows the deficiency of state-of-the-art models in inferences that require lexical and world knowledge.

World Knowledge

Integrating Multiplicative Features into Supervised Distributional Methods for Lexical Entailment

no code implementations SEMEVAL 2018 Tu Vu, Vered Shwartz

Supervised distributional methods are applied successfully in lexical entailment, but recent work questioned whether these methods actually learn a relation between two words.

Lexical Entailment

Olive Oil is Made of Olives, Baby Oil is Made for Babies: Interpreting Noun Compounds using Paraphrases in a Neural Model

1 code implementation HLT 2018 Vered Shwartz, Chris Waterson

Automatic interpretation of the relation between the constituents of a noun compound, e.g. olive oil (source) and baby oil (purpose), is an important task for many NLP applications.

Memorization Relation

A Consolidated Open Knowledge Representation for Multiple Texts

1 code implementation WS 2017 Rachel Wities, Vered Shwartz, Gabriel Stanovsky, Meni Adler, Ori Shapira, Shyam Upadhyay, Dan Roth, Eugenio Martinez Camara, Iryna Gurevych, Ido Dagan

We propose to move from Open Information Extraction (OIE) ahead to Open Knowledge Representation (OKR), aiming to represent information conveyed jointly in a set of texts in an open text-based manner.

Lexical Entailment Open Information Extraction

Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection

1 code implementation EACL 2017 Vered Shwartz, Enrico Santus, Dominik Schlechtweg

The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution.

Hypernym Discovery

CogALex-V Shared Task: LexNET - Integrated Path-based and Distributional Method for the Identification of Semantic Relations

1 code implementation WS 2016 Vered Shwartz, Ido Dagan

The reported results in the shared task bring this submission to the third place on subtask 1 (word relatedness), and the first place on subtask 2 (semantic relation classification), demonstrating the utility of integrating the complementary path-based and distributional information sources in recognizing concrete semantic relations.

Classification General Classification +2

Improving Hypernymy Detection with an Integrated Path-based and Distributional Method

1 code implementation ACL 2016 Vered Shwartz, Yoav Goldberg, Ido Dagan

Detecting hypernymy relations is a key task in NLP, which is addressed in the literature using two complementary approaches.
