no code implementations • 20 Mar 2022 • Abhilasha Sancheti, Balaji Vasan Srinivasan, Rachel Rudinger
We introduce a new task of entailment-relation-aware paraphrase generation, which aims at generating a paraphrase that conforms to a given entailment relation (e.g., equivalent, forward entailing, or reverse entailing) with respect to a given input.
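A rough illustration of the task format (examples invented; we assume the convention that a forward-entailing paraphrase is entailed by the input, and a reverse-entailing paraphrase entails it):

```python
# Invented examples of the three entailment relations for paraphrase
# generation; assumed convention: the input entails a forward-entailing
# paraphrase, and a reverse-entailing paraphrase entails the input.
examples = [
    ("A dog is chasing a ball.", "equivalent",        "A dog chases a ball."),
    ("A dog is chasing a ball.", "forward entailing", "An animal is chasing a toy."),
    ("A dog is chasing a ball.", "reverse entailing", "A puppy is chasing a red ball."),
]
for source, relation, paraphrase in examples:
    print(f"[{relation}] {source} -> {paraphrase}")
```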
no code implementations • 27 Dec 2021 • Abhilasha Sancheti, Rachel Rudinger
SIF is a two-stage framework that, in the first stage, fine-tunes an LM on a small set of event sequence description (ESD) examples.
1 code implementation • ACL 2021 • Christine Herlihy, Rachel Rudinger
Crowdworker-constructed natural language inference (NLI) datasets have been found to contain statistical artifacts associated with the annotation process that allow hypothesis-only classifiers to achieve better-than-random performance (Poliak et al., 2018; Gururangan et al., 2018; Tsuchiya, 2018).
1 code implementation • 18 Apr 2021 • Noah Weber, Anton Belyy, Nils Holzenberger, Rachel Rudinger, Benjamin Van Durme
Event schemas are structured knowledge sources defining typical real-world scenarios (e.g., going to an airport).
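A toy rendering of such a schema as structured data (the paper's formalism is richer; the names below are purely illustrative):

```python
# Toy rendering of an event schema: a typical scenario with slots and
# ordered steps. Illustrative only; not the paper's schema formalism.
airport_schema = {
    "scenario": "going to an airport",
    "slots": ["traveler", "luggage", "airport"],
    "steps": [
        "traveler packs luggage",
        "traveler travels to airport",
        "traveler checks in luggage",
        "traveler passes through security",
        "traveler boards the plane",
    ],
}
print(" -> ".join(airport_schema["steps"]))
```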
no code implementations • 14 Dec 2020 • Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi
In this paper, we investigate the extent to which neural models can reason about natural language rationales that explain model predictions, relying only on distant supervision with no additional annotation cost for human-written rationales.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Rachel Rudinger, Vered Shwartz, Jena D. Hwang, Chandra Bhagavatula, Maxwell Forbes, Ronan Le Bras, Noah A. Smith, Yejin Choi
Defeasible inference is a mode of reasoning in which an inference (X is a bird, therefore X flies) may be weakened or overturned in light of new evidence (X is a penguin).
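A minimal sketch of the resulting task format, with invented examples and illustrative field names: an update either strengthens or weakens the default inference.

```python
# Invented defeasible-inference examples; field names are illustrative.
examples = [
    {"premise": "X is a bird.",
     "hypothesis": "X flies.",
     "update": "X is a penguin.",
     "label": "weakener"},      # new evidence overturns the inference
    {"premise": "X is a bird.",
     "hypothesis": "X flies.",
     "update": "X is perched high in a tree.",
     "label": "strengthener"},  # new evidence supports the inference
]
# One natural framing: classify (premise, hypothesis, update) triples as
# strengthener vs. weakener with any NLI-style encoder.
for ex in examples:
    print(f'{ex["update"]!r} -> {ex["label"]}')
```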
1 code implementation • 6 Apr 2020 • Vered Shwartz, Rachel Rudinger, Oyvind Tafjord
Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models.
no code implementations • EMNLP 2020 • Noah Weber, Rachel Rudinger, Benjamin Van Durme
When does a sequence of events define an everyday scenario and how can this knowledge be induced from text?
1 code implementation • LREC 2020 • Aaron Steven White, Elias Stengel-Eskin, Siddharth Vashishtha, Venkata Govindarajan, Dee Ann Reisinger, Tim Vieira, Keisuke Sakaguchi, Sheng Zhang, Francis Ferraro, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
We present the Universal Decompositional Semantics (UDS) dataset (v1.0), which is bundled with the Decomp toolkit (v0.1).
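A sketch of loading the corpus with the toolkit (pip install decomp); the entry point and graph-ID indexing follow the toolkit's documented usage as best understood, and the specific graph ID is illustrative:

```python
# Sketch only: UDSCorpus, the split keyword, graph-ID indexing, and
# .sentence follow the Decomp toolkit's documentation; treat as an
# assumption and verify against the docs before relying on it.
from decomp import UDSCorpus

uds = UDSCorpus(split="train")   # downloads and caches UDS v1.0 on first use
graph = uds["ewt-train-12"]      # one sentence-level UDS graph (illustrative ID)
print(graph.sentence)            # the raw sentence underlying the graph
```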
1 code implementation • NAACL 2019 • Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger
The Word Embedding Association Test shows that GloVe and word2vec word embeddings exhibit human-like implicit biases based on gender, race, and other social constructs (Caliskan et al., 2017).
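A minimal sketch of the WEAT effect size from Caliskan et al. (2017), assuming embeddings are supplied as a plain {word: vector} dict; the word sets and random vectors below are toy stand-ins:

```python
# Minimal WEAT effect size (Caliskan et al., 2017). E maps words to vectors.
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B, E):
    """s(w, A, B): mean cosine to attribute set A minus mean cosine to B."""
    return (np.mean([cos(E[w], E[a]) for a in A])
            - np.mean([cos(E[w], E[b]) for b in B]))

def weat_effect_size(X, Y, A, B, E):
    """Cohen's-d-style effect size over target sets X, Y and attributes A, B."""
    s = {w: assoc(w, A, B, E) for w in X + Y}
    return ((np.mean([s[x] for x in X]) - np.mean([s[y] for y in Y]))
            / np.std(list(s.values()), ddof=1))

# Toy demo with random vectors; real tests use, e.g., career/family word sets.
rng = np.random.default_rng(0)
E = {w: rng.normal(size=50) for w in ["man", "woman", "career", "home"]}
print(weat_effect_size(["man"], ["woman"], ["career"], ["home"], E))
```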
no code implementations • 11 Jan 2019 • J. Edward Hu, Rachel Rudinger, Matt Post, Benjamin Van Durme
We present ParaBank, a large-scale English paraphrase dataset that surpasses prior work in both quantity and quality.
no code implementations • EMNLP 2018 • Sheng Zhang, Xutai Ma, Rachel Rudinger, Kevin Duh, Benjamin Van Durme
We introduce the task of cross-lingual decompositional semantic parsing: mapping content provided in a source language into a decompositional semantic analysis based on a target language.
no code implementations • EMNLP 2018 • Aaron Steven White, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
We use this dataset, which we make publicly available, to probe the behavior of current state-of-the-art neural systems, showing that these systems make certain systematic errors that are clearly visible through the lens of factuality prediction.
1 code implementation • SEMEVAL 2018 • Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
We propose a hypothesis-only baseline for diagnosing Natural Language Inference (NLI).
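A minimal sketch of such a baseline: the classifier never sees the premise, so any above-chance accuracy signals annotation artifacts in the hypotheses (file and field names here are hypothetical):

```python
# Hypothesis-only NLI baseline sketch; any JSONL NLI data with a
# hypothesis and gold label would work. File/field names are hypothetical.
import json

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def load_split(path):
    """Read a JSONL NLI split, keeping only the hypothesis and gold label."""
    hyps, labels = [], []
    with open(path) as f:
        for line in f:
            ex = json.loads(line)
            hyps.append(ex["hypothesis"])    # the premise is deliberately ignored
            labels.append(ex["gold_label"])
    return hyps, labels

train_h, train_y = load_split("nli_train.jsonl")
test_h, test_y = load_split("nli_test.jsonl")

vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(train_h), train_y)

acc = accuracy_score(test_y, clf.predict(vec.transform(test_h)))
print(f"hypothesis-only accuracy: {acc:.3f} (3-way chance is about 0.333)")
```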
1 code implementation • NAACL 2018 • Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme
We present an empirical study of gender bias in coreference resolution systems.
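A sketch of templated evaluation in the spirit of the paper's schemas: minimal pairs that differ only in the pronoun, so a coreference system's output can be compared across genders (the template wording is invented):

```python
# Minimal pairs in the spirit of Winogender-style schemas; template invented.
TEMPLATE = "The {occupation} told the {participant} that {pronoun} would arrive soon."

def make_pair(occupation, participant):
    return {pron: TEMPLATE.format(occupation=occupation,
                                  participant=participant,
                                  pronoun=pron)
            for pron in ("he", "she", "they")}

for pron, sent in make_pair("engineer", "customer").items():
    print(f"[{pron}] {sent}")
# A system shows bias if resolving the pronoun to "engineer" vs. "customer"
# depends on the pronoun's gender alone.
```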
no code implementations • EMNLP (ACL) 2018 • Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme
We present a large-scale collection of diverse natural language inference (NLI) datasets that help provide insight into how well a sentence representation captures distinct types of reasoning.
1 code implementation • EMNLP 2018 • Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, Benjamin Van Durme
We present a model for semantic proto-role labeling (SPRL) using an adapted bidirectional LSTM encoding strategy that we call "Neural-Davidsonian": predicate-argument structure is represented as pairs of hidden states corresponding to predicate and argument head tokens of the input sequence.
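A minimal sketch of this pairing idea in PyTorch; the dimensions and the size of the property inventory are illustrative, not the paper's configuration:

```python
# Sketch of the "Neural-Davidsonian" idea: encode the sentence with a
# BiLSTM, then represent each predicate-argument pair by concatenating the
# hidden states at the predicate and argument head tokens.
import torch
import torch.nn as nn

class NeuralDavidsonianSPRL(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=200, n_properties=18):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # One score per proto-role property for each (predicate, argument) pair.
        self.scorer = nn.Sequential(
            nn.Linear(4 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_properties))

    def forward(self, tokens, pred_idx, arg_idx):
        h, _ = self.lstm(self.emb(tokens))           # (batch, seq, 2*hidden)
        batch = torch.arange(tokens.size(0))
        pair = torch.cat([h[batch, pred_idx], h[batch, arg_idx]], dim=-1)
        return self.scorer(pair)                     # one logit per property

model = NeuralDavidsonianSPRL(vocab_size=10000)
toks = torch.randint(0, 10000, (2, 12))              # two toy sentences
logits = model(toks, pred_idx=torch.tensor([3, 5]), arg_idx=torch.tensor([1, 7]))
print(logits.shape)  # torch.Size([2, 18])
```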
1 code implementation • NAACL 2018 • Rachel Rudinger, Aaron Steven White, Benjamin Van Durme
We present two neural models for event factuality prediction, which yield significant performance gains over previous models on three event factuality datasets: FactBank, UW, and MEANTIME.
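Factuality in these datasets is annotated on a continuous [-3, 3] scale (certainly did not happen to certainly happened); a sketch of the standard evaluation, with invented numbers:

```python
# Toy gold/predicted factuality scores on the [-3, 3] scale; the standard
# metrics for these datasets are mean absolute error and Pearson r.
import numpy as np
from scipy.stats import pearsonr

gold = np.array([3.0, -3.0, 2.25, 0.0, -1.5])   # invented annotations
pred = np.array([2.4, -2.1, 1.9, 0.5, -0.75])   # invented system outputs

mae = np.mean(np.abs(gold - pred))
r, _ = pearsonr(gold, pred)
print(f"MAE = {mae:.2f}, Pearson r = {r:.2f}")
```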
1 code implementation • WS 2017 • Rachel Rudinger, Chandler May, Benjamin Van Durme
We analyze the Stanford Natural Language Inference (SNLI) corpus in an investigation of bias and stereotyping in NLP data.
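One natural analysis in this vein, sketched below with toy counts, measures pointwise mutual information between an identity term in the premise and words elicited in hypotheses; the paper's exact statistics may differ.

```python
# Toy counts only; PMI > 0 means a stronger-than-chance association between
# the identity term and the hypothesis word.
import math

n = 100_000          # total (identity term, hypothesis word) co-occurrence pairs
c_identity = 2_000   # pairs whose premise mentions the identity term
c_word = 500         # pairs whose hypothesis contains the word
c_joint = 60         # pairs with both

pmi = math.log2((c_joint / n) / ((c_identity / n) * (c_word / n)))
print(f"PMI = {pmi:.2f} bits")
```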
no code implementations • TACL 2017 • Sheng Zhang, Rachel Rudinger, Kevin Duh, Benjamin Van Durme
Humans have the capacity to draw common-sense inferences from natural language: various things that are likely but not certain to hold based on established discourse, and are rarely stated explicitly.
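As a rough illustration of ordinal common-sense inference, judgments fall on a 5-point likelihood scale rather than the usual 3-way entailment labels (examples invented):

```python
# Invented examples on a 5-point ordinal likelihood scale.
SCALE = {5: "very likely", 4: "likely", 3: "plausible",
         2: "technically possible", 1: "impossible"}
examples = [
    ("A person is eating at a diner.", "The person pays the bill.", 4),
    ("A person is eating at a diner.", "The person is a dragon.", 1),
]
for context, hypothesis, score in examples:
    print(f"{hypothesis!r}: {score} ({SCALE[score]})")
```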
no code implementations • 8 Oct 2016 • Aaron Steven White, Drew Reisinger, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
A linking theory explains how verbs' semantic arguments are mapped to their syntactic arguments: the inverse of the Semantic Role Labeling task from the shallow semantic parsing literature.
no code implementations • TACL 2015 • Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, Benjamin Van Durme
We present the first large-scale, corpus-based verification of Dowty's seminal theory of proto-roles.