Search Results for author: Rachel Rudinger

Found 31 papers, 13 papers with code

Entailment Relation Aware Paraphrase Generation

no code implementations 20 Mar 2022 Abhilasha Sancheti, Balaji Vasan Srinivasan, Rachel Rudinger

We introduce a new task of entailment relation aware paraphrase generation, which aims at generating a paraphrase that conforms to a given entailment relation (e.g., equivalent, forward entailing, or reverse entailing) with respect to a given input.

Natural Language Inference · Paraphrase Generation · +1

What do Large Language Models Learn about Scripts?

no code implementations 27 Dec 2021 Abhilasha Sancheti, Rachel Rudinger

SIF is a two-stage framework that, in the first stage, fine-tunes a language model on a small set of event sequence description (ESD) examples.

MedNLI Is Not Immune: Natural Language Inference Artifacts in the Clinical Domain

1 code implementation ACL 2021 Christine Herlihy, Rachel Rudinger

Crowdworker-constructed natural language inference (NLI) datasets have been found to contain statistical artifacts associated with the annotation process that allow hypothesis-only classifiers to achieve better-than-random performance (Poliak et al., 2018; Gururangan et al., 2018; Tsuchiya, 2018).

Natural Language Inference
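
To make the artifact finding above concrete: a hypothesis-only baseline trains a classifier that never sees the premise, so any better-than-random accuracy must come from artifacts in the hypotheses themselves. The following is a minimal illustrative sketch; the toy clinical-style examples are invented for illustration, not drawn from MedNLI:

```python
# Minimal sketch of a hypothesis-only NLI baseline (cf. Poliak et al., 2018).
# If a classifier that never sees the premise beats chance, the dataset
# contains annotation artifacts. The toy data below is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical (hypothesis, label) pairs; premises are deliberately unused.
train = [
    ("The patient is asymptomatic.", "contradiction"),
    ("The patient has a history of smoking.", "entailment"),
    ("The patient may have an infection.", "neutral"),
    ("The patient denies chest pain.", "contradiction"),
]
hypotheses, labels = zip(*train)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(hypotheses)  # features from hypotheses only
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# In a real experiment, evaluate on held-out data; accuracy above the
# majority-class baseline signals hypothesis-side artifacts.
print(clf.score(X, labels))
```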

Human Schema Curation via Causal Association Rule Mining

1 code implementation 18 Apr 2021 Noah Weber, Anton Belyy, Nils Holzenberger, Rachel Rudinger, Benjamin Van Durme

Event schemas are structured knowledge sources defining typical real-world scenarios (e.g., going to an airport).

Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision

no code implementations 14 Dec 2020 Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi

In this paper, we investigate the extent to which neural models can reason about natural language rationales that explain model predictions, relying only on distant supervision with no additional annotation cost for human-written rationales.

"You are grounded!": Latent Name Artifacts in Pre-trained Language Models

1 code implementation EMNLP 2020 Vered Shwartz, Rachel Rudinger, Oyvind Tafjord

Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models.

Reading Comprehension

Causal Inference of Script Knowledge

no code implementations EMNLP 2020 Noah Weber, Rachel Rudinger, Benjamin Van Durme

When does a sequence of events define an everyday scenario and how can this knowledge be induced from text?

Causal Inference

On Measuring Social Biases in Sentence Encoders

1 code implementation NAACL 2019 Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger

The Word Embedding Association Test shows that GloVe and word2vec word embeddings exhibit human-like implicit biases based on gender, race, and other social constructs (Caliskan et al., 2017).

Word Embeddings
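
For reference, the Word Embedding Association Test cited above measures bias as the difference between a target word's mean cosine similarity to two attribute sets. A minimal sketch of the association measure, using random toy vectors in place of real GloVe or word2vec embeddings:

```python
# Minimal sketch of the WEAT association measure (Caliskan et al., 2017),
# which the paper above extends to sentence encoders.
# The embeddings here are random toy vectors, not trained embeddings.
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["flower", "insect", "pleasant", "unpleasant"]}

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    # s(w, A, B): mean cosine similarity to attribute set A minus to B.
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

def weat(X, Y, A, B):
    # Test statistic: summed association of target set X vs. target set Y.
    return sum(assoc(x, A, B) for x in X) - sum(assoc(y, A, B) for y in Y)

print(weat(["flower"], ["insect"], ["pleasant"], ["unpleasant"]))
```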

Cross-lingual Decompositional Semantic Parsing

no code implementations EMNLP 2018 Sheng Zhang, Xutai Ma, Rachel Rudinger, Kevin Duh, Benjamin Van Durme

We introduce the task of cross-lingual decompositional semantic parsing: mapping content provided in a source language into a decompositional semantic analysis based on a target language.

Semantic Parsing

Lexicosyntactic Inference in Neural Models

no code implementations EMNLP 2018 Aaron Steven White, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme

We use this dataset, which we make publicly available, to probe the behavior of current state-of-the-art neural systems, showing that these systems make certain systematic errors that are clearly visible through the lens of factuality prediction.

Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation

no code implementations EMNLP (ACL) 2018 Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme

We present a large-scale collection of diverse natural language inference (NLI) datasets that help provide insight into how well a sentence representation captures distinct types of reasoning.

Natural Language Inference

Neural-Davidsonian Semantic Proto-role Labeling

1 code implementation EMNLP 2018 Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, Benjamin Van Durme

We present a model for semantic proto-role labeling (SPRL) using an adapted bidirectional LSTM encoding strategy that we call "Neural-Davidsonian": predicate-argument structure is represented as pairs of hidden states corresponding to predicate and argument head tokens of the input sequence.
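
A rough sketch of the encoding strategy this abstract describes, under assumed dimensions; the layer sizes, number of proto-role properties, and scoring head below are illustrative guesses, not the paper's configuration:

```python
# Hedged sketch of a "Neural-Davidsonian" encoding: a BiLSTM encodes the
# sentence, and the hidden states at the predicate and argument head
# positions are paired and scored for proto-role properties.
import torch
import torch.nn as nn

class SPRLSketch(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=100, hidden=128, n_props=18):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True,
                              batch_first=True)
        # Score each proto-role property from the (predicate, argument) pair.
        self.scorer = nn.Linear(4 * hidden, n_props)

    def forward(self, tokens, pred_idx, arg_idx):
        states, _ = self.bilstm(self.embed(tokens))  # (B, T, 2*hidden)
        batch = torch.arange(tokens.size(0))
        pair = torch.cat([states[batch, pred_idx],
                          states[batch, arg_idx]], dim=-1)
        return self.scorer(pair)                     # property logits

model = SPRLSketch()
toks = torch.randint(0, 1000, (1, 8))                # toy sentence of 8 ids
print(model(toks, torch.tensor([2]), torch.tensor([5])).shape)
```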

Neural models of factuality

1 code implementation NAACL 2018 Rachel Rudinger, Aaron Steven White, Benjamin Van Durme

We present two neural models for event factuality prediction, which yield significant performance gains over previous models on three event factuality datasets: FactBank, UW, and MEANTIME.

Social Bias in Elicited Natural Language Inferences

1 code implementation WS 2017 Rachel Rudinger, Chandler May, Benjamin Van Durme

We analyze the Stanford Natural Language Inference (SNLI) corpus in an investigation of bias and stereotyping in NLP data.

Language Modelling · Natural Language Inference · +1

Ordinal Common-sense Inference

no code implementations TACL 2017 Sheng Zhang, Rachel Rudinger, Kevin Duh, Benjamin Van Durme

Humans have the capacity to draw common-sense inferences from natural language: various things that are likely but not certain to hold based on established discourse, and are rarely stated explicitly.

Common Sense Reasoning · Natural Language Inference

Computational linking theory

no code implementations 8 Oct 2016 Aaron Steven White, Drew Reisinger, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme

A linking theory explains how verbs' semantic arguments are mapped to their syntactic arguments: the inverse of the Semantic Role Labeling task from the shallow semantic parsing literature.

Semantic Parsing · Semantic Role Labeling

Semantic Proto-Roles

no code implementations TACL 2015 Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, Benjamin Van Durme

We present the first large-scale, corpus-based verification of Dowty's seminal theory of proto-roles.

Semantic Role Labeling
