Search Results for author: Benjamin Van Durme

Found 138 papers, 41 papers with code

Few-Shot Semantic Parsing with Language Models Trained On Code

no code implementations 16 Dec 2021 Richard Shin, Benjamin Van Durme

Intuitively, such models can more easily output canonical utterances as they are closer to the natural language used for pre-training.

Semantic Parsing

BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation

1 code implementation EMNLP 2021 Haoran Xu, Benjamin Van Durme, Kenton Murray

The success of bidirectional encoders using masked language models, such as BERT, on numerous natural language processing tasks has prompted researchers to attempt to incorporate these pre-trained models into neural machine translation (NMT) systems.

Language Modelling Machine Translation +1

Guided Generation of Cause and Effect

no code implementations 21 Jul 2021 Zhongyang Li, Xiao Ding, Ting Liu, J. Edward Hu, Benjamin Van Durme

We present a conditional text generation framework that posits sentential expressions of possible causes and effects.

Conditional Text Generation Knowledge Graphs

Factoring Statutory Reasoning as Language Understanding Challenges

1 code implementation ACL 2021 Nils Holzenberger, Benjamin Van Durme

Statutory reasoning is the task of determining whether a legal statute, stated in natural language, applies to the text description of a case.

Natural Language Inference

Human Schema Curation via Causal Association Rule Mining

1 code implementation 18 Apr 2021 Noah Weber, Anton Belyy, Nils Holzenberger, Rachel Rudinger, Benjamin Van Durme

Event schemas are structured knowledge sources defining typical real-world scenarios (e.g., going to an airport).

Moving on from OntoNotes: Coreference Resolution Model Transfer

no code implementations EMNLP 2021 Patrick Xia, Benjamin Van Durme

Academic neural models for coreference resolution (coref) are typically trained on a single dataset, OntoNotes, and model improvements are benchmarked on that same dataset.

Coreference Resolution

Joint Universal Syntactic and Semantic Parsing

1 code implementation 12 Apr 2021 Elias Stengel-Eskin, Kenton Murray, Sheng Zhang, Aaron Steven White, Benjamin Van Durme

While numerous attempts have been made to jointly parse syntax and semantics, high performance in one domain typically comes at the price of performance in the other.

Semantic Parsing

InFillmore: Frame-Guided Language Generation with Bidirectional Context

no code implementations Joint Conference on Lexical and Computational Semantics 2021 Jiefu Ou, Nathaniel Weir, Anton Belyy, Felix Yu, Benjamin Van Durme

We propose a structured extension to bidirectional-context conditional language generation, or "infilling," inspired by Frame Semantic theory (Fillmore, 1976).

Text Infilling

Gradual Fine-Tuning for Low-Resource Domain Adaptation

1 code implementation EACL (AdaptNLP) 2021 Haoran Xu, Seth Ebner, Mahsa Yarmohammadi, Aaron Steven White, Benjamin Van Durme, Kenton Murray

Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain.

Domain Adaptation

Joint Modeling of Arguments for Event Understanding

1 code implementation 20 Nov 2020 Yunmo Chen, Tongfei Chen, Benjamin Van Durme

We recognize the task of event argument linking in documents as similar to that of intent slot resolution in dialogue, providing a Transformer-based model that extends from a recently proposed solution to resolve references to slots.

COD3S: Diverse Generation with Discrete Semantic Signatures

1 code implementation EMNLP 2020 Nathaniel Weir, João Sedoc, Benjamin Van Durme

We present COD3S, a novel method for generating semantically diverse sentences using neural sequence-to-sequence (seq2seq) models.

Semantic Textual Similarity

Script Induction as Association Rule Mining

no code implementations WS 2020 Anton Belyy, Benjamin Van Durme

We show that the count-based Script Induction models of Chambers and Jurafsky (2008) and Jans et al. (2012) can be unified in a general framework of narrative chain likelihood maximization.

Cloze Test

Iterative Paraphrastic Augmentation with Discriminative Span Alignment

no code implementations 1 Jul 2020 Ryan Culkin, J. Edward Hu, Elias Stengel-Eskin, Guanghui Qin, Benjamin Van Durme

We introduce a novel paraphrastic augmentation strategy based on sentence-level lexically constrained paraphrasing and discriminative span alignment.

Incremental Neural Coreference Resolution in Constant Memory

no code implementations EMNLP 2020 Patrick Xia, João Sedoc, Benjamin Van Durme

We investigate modeling coreference resolution under a fixed memory constraint by extending an incremental clustering algorithm to utilize contextualized encoders and neural components.

Coreference Resolution

Complementing Lexical Retrieval with Semantic Residual Embedding

no code implementations 29 Apr 2020 Luyu Gao, Zhuyun Dai, Tongfei Chen, Zhen Fan, Benjamin Van Durme, Jamie Callan

This paper presents CLEAR, a retrieval model that seeks to complement classical lexical exact-match models such as BM25 with semantic matching signals from a neural embedding matching model.

Information Retrieval

Probing Neural Language Models for Human Tacit Assumptions

no code implementations 10 Apr 2020 Nathaniel Weir, Adam Poliak, Benjamin Van Durme

Our prompts are based on human responses in a psychological study of conceptual associations.

Hierarchical Entity Typing via Multi-level Learning to Rank

1 code implementation ACL 2020 Tongfei Chen, Yunmo Chen, Benjamin Van Durme

We propose a novel method for hierarchical entity classification that embraces ontological structure at both training and during prediction.

Entity Typing Learning-To-Rank

Causal Inference of Script Knowledge

no code implementations EMNLP 2020 Noah Weber, Rachel Rudinger, Benjamin Van Durme

When does a sequence of events define an everyday scenario and how can this knowledge be induced from text?

Causal Inference

Reading the Manual: Event Extraction as Definition Comprehension

no code implementations EMNLP (spnlp) 2020 Yunmo Chen, Tongfei Chen, Seth Ebner, Aaron Steven White, Benjamin Van Durme

We ask whether text understanding has progressed to where we may extract event information through incremental refinement of bleached statements derived from annotation manuals.

Event Extraction

Multi-Sentence Argument Linking

no code implementations ACL 2020 Seth Ebner, Patrick Xia, Ryan Culkin, Kyle Rawlins, Benjamin Van Durme

We present a novel document-level model for finding argument spans that fill an event's roles, connecting related ideas in sentence-level semantic role labeling and coreference resolution.

Coreference Resolution Semantic Role Labeling +1

Interactive Refinement of Cross-Lingual Word Embeddings

1 code implementation EMNLP 2020 Michelle Yuan, Mozhi Zhang, Benjamin Van Durme, Leah Findlater, Jordan Boyd-Graber

Cross-lingual word embeddings transfer knowledge between languages: models trained on high-resource languages can predict in low-resource languages.

Active Learning General Classification +2

Bag-of-Words Transfer: Non-Contextual Techniques for Multi-Task Learning

no code implementations WS 2019 Seth Ebner, Felicity Wang, Benjamin Van Durme

Many architectures for multi-task learning (MTL) have been proposed to take advantage of transfer among tasks, often involving complex models and training procedures.

Multi-Task Learning Word Embeddings

Universal Decompositional Semantic Parsing

no code implementations ACL 2020 Elias Stengel-Eskin, Aaron Steven White, Sheng Zhang, Benjamin Van Durme

We introduce a transductive model for parsing into Universal Decompositional Semantics (UDS) representations, which jointly learns to map natural language utterances into UDS graph structures and annotate the graph with decompositional semantic attribute scores.

Semantic Parsing

Exact and/or Fast Nearest Neighbors

1 code implementation 6 Oct 2019 Matthew Francis-Landau, Benjamin Van Durme

Prior methods for retrieval of nearest neighbors in high dimensions are either fast and approximate, providing probabilistic guarantees of returning the correct answer, or slow and exact, performing an exhaustive search.

Data Structures and Algorithms
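For context, the slow-but-exact baseline the abstract contrasts against is a plain exhaustive scan. A minimal sketch (this is the baseline only, not the paper's method, which aims to be both exact and fast):

```python
def nearest_neighbor(query, points):
    """Exhaustive exact nearest-neighbor search: O(n * d) per query."""
    best, best_d2 = None, float("inf")
    for i, p in enumerate(points):
        # Squared Euclidean distance avoids a sqrt per candidate.
        d2 = sum((q - x) ** 2 for q, x in zip(query, p))
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best, best_d2 ** 0.5  # index and true distance
```

Every point is visited, so the answer is guaranteed correct; the cost is what approximate methods like LSH trade away.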

Uncertain Natural Language Inference

no code implementations ACL 2020 Tongfei Chen, Zhengping Jiang, Adam Poliak, Keisuke Sakaguchi, Benjamin Van Durme

We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments.

Learning-To-Rank Natural Language Inference

Broad-Coverage Semantic Parsing as Transduction

no code implementations IJCNLP 2019 Sheng Zhang, Xutai Ma, Kevin Duh, Benjamin Van Durme

We unify different broad-coverage semantic parsing tasks under a transduction paradigm, and propose an attention-based neural framework that incrementally builds a meaning representation via a sequence of semantic relations.

AMR Parsing UCCA Parsing

A Discriminative Neural Model for Cross-Lingual Word Alignment

no code implementations IJCNLP 2019 Elias Stengel-Eskin, Tzu-Ray Su, Matt Post, Benjamin Van Durme

We introduce a novel discriminative word alignment model, which we integrate into a Transformer-based machine translation model.

Machine Translation NER +2

Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference

1 code implementation ACL 2019 Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, Alexander M. Rush

In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise.

Natural Language Inference

Learning to Rank for Plausible Plausibility

no code implementations ACL 2019 Zhongyang Li, Tongfei Chen, Benjamin Van Durme

Researchers illustrate improvements in contextual encoding strategies via resultant performance on a battery of shared Natural Language Understanding (NLU) tasks.

Learning-To-Rank Natural Language Understanding

Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting

1 code implementation NAACL 2019 J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, Benjamin Van Durme

Lexically-constrained sequence decoding allows for explicit positive or negative phrase-based constraints to be placed on target output strings in generation tasks such as machine translation or monolingual text rewriting.

Data Augmentation Machine Translation +3

AMR Parsing as Sequence-to-Graph Transduction

1 code implementation ACL 2019 Sheng Zhang, Xutai Ma, Kevin Duh, Benjamin Van Durme

Our experimental results outperform all previously reported SMATCH scores on both AMR 2.0 (76.3% F1 on LDC2017T10) and AMR 1.0 (70.2% F1 on LDC2014T12).

AMR Parsing

Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling

no code implementations ICLR 2019 Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen

Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018).

Language Modelling

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension

no code implementations SEMEVAL 2019 Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick

Our results show that pretraining on language modeling performs the best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models, and CCG supertagging and NLI pretraining perform comparably.

CCG Supertagging Language Modelling +1

Fine-Grained Temporal Relation Extraction

no code implementations ACL 2019 Siddharth Vashishtha, Benjamin Van Durme, Aaron Steven White

We present a novel semantic framework for modeling temporal relations and event durations that maps pairs of events to real-valued scales.

Relation Extraction Transfer Learning

Decomposing Generalization: Models of Generic, Habitual, and Episodic Statements

no code implementations TACL 2019 Venkata Subrahmanyan Govindarajan, Benjamin Van Durme, Aaron Steven White

We present a novel semantic framework for modeling linguistic expressions of generalization (generic, habitual, and episodic statements) as combinations of simple, real-valued referential properties of predicates and their arguments.

Word Embeddings

Cross-lingual Decompositional Semantic Parsing

no code implementations EMNLP 2018 Sheng Zhang, Xutai Ma, Rachel Rudinger, Kevin Duh, Benjamin Van Durme

We introduce the task of cross-lingual decompositional semantic parsing: mapping content provided in a source language into a decompositional semantic analysis based on a target language.

Semantic Parsing

Lexicosyntactic Inference in Neural Models

no code implementations EMNLP 2018 Aaron Steven White, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme

We use this dataset, which we make publicly available, to probe the behavior of current state-of-the-art neural systems, showing that these systems make certain systematic errors that are clearly visible through the lens of factuality prediction.

Efficient Online Scalar Annotation with Bounded Support

no code implementations ACL 2018 Keisuke Sakaguchi, Benjamin Van Durme

We describe a novel method for efficiently eliciting scalar annotations for dataset construction and system quality estimation by human judgments.

Halo: Learning Semantics-Aware Representations for Cross-Lingual Information Extraction

no code implementations SEMEVAL 2018 Hongyuan Mei, Sheng Zhang, Kevin Duh, Benjamin Van Durme

Cross-lingual information extraction (CLIE) is an important and challenging task, especially in low resource scenarios.

TAG

On the Evaluation of Semantic Phenomena in Neural Machine Translation Using Natural Language Inference

1 code implementation NAACL 2018 Adam Poliak, Yonatan Belinkov, James Glass, Benjamin Van Durme

We propose a process for investigating the extent to which sentence representations arising from neural machine translation (NMT) systems encode distinct semantic phenomena.

Machine Translation Natural Language Inference +1

Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation

no code implementations EMNLP (ACL) 2018 Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme

We present a large-scale collection of diverse natural language inference (NLI) datasets that help provide insight into how well a sentence representation captures distinct types of reasoning.

Natural Language Inference

Neural-Davidsonian Semantic Proto-role Labeling

1 code implementation EMNLP 2018 Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, Benjamin Van Durme

We present a model for semantic proto-role labeling (SPRL) using an adapted bidirectional LSTM encoding strategy that we call "Neural-Davidsonian": predicate-argument structure is represented as pairs of hidden states corresponding to predicate and argument head tokens of the input sequence.

Cross-lingual Semantic Parsing

no code implementations 21 Apr 2018 Sheng Zhang, Kevin Duh, Benjamin Van Durme

We introduce the task of cross-lingual semantic parsing: mapping content provided in a source language into a meaning representation based on a target language.

Semantic Parsing

Neural models of factuality

1 code implementation NAACL 2018 Rachel Rudinger, Aaron Steven White, Benjamin Van Durme

We present two neural models for event factuality prediction, which yield significant performance gains over previous models on three event factuality datasets: FactBank, UW, and MEANTIME.

Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework

no code implementations IJCNLP 2017 Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme

We propose to unify a variety of existing semantic classification tasks, such as semantic role labeling, anaphora resolution, and paraphrase detection, under the heading of Recognizing Textual Entailment (RTE).

General Classification Image Captioning +2

Selective Decoding for Cross-lingual Open Information Extraction

no code implementations IJCNLP 2017 Sheng Zhang, Kevin Duh, Benjamin Van Durme

Cross-lingual open information extraction is the task of distilling facts from the source language into representations in the target language.

Machine Translation Open Information Extraction

Grammatical Error Correction with Neural Reinforcement Learning

no code implementations IJCNLP 2017 Keisuke Sakaguchi, Matt Post, Benjamin Van Durme

We propose a neural encoder-decoder model with reinforcement learning (NRL) for grammatical error correction (GEC).

Grammatical Error Correction

Pocket Knowledge Base Population

no code implementations ACL 2017 Travis Wolfe, Mark Dredze, Benjamin Van Durme

Existing Knowledge Base Population methods extract relations from a closed relational schema with limited coverage leading to sparse KBs.

Knowledge Base Population Open Information Extraction +1

Error-repair Dependency Parsing for Ungrammatical Texts

1 code implementation ACL 2017 Keisuke Sakaguchi, Matt Post, Benjamin Van Durme

We propose a new dependency parsing scheme which jointly parses a sentence and repairs grammatical errors by extending the non-directional transition-based formalism of Goldberg and Elhadad (2010) with three additional actions: SUBSTITUTE, DELETE, INSERT.

Dependency Parsing

Streaming Word Embeddings with the Space-Saving Algorithm

2 code implementations 24 Apr 2017 Chandler May, Kevin Duh, Benjamin Van Durme, Ashwin Lall

We develop a streaming (one-pass, bounded-memory) word embedding algorithm based on the canonical skip-gram with negative sampling algorithm implemented in word2vec.

Word Embeddings
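The bounded-memory ingredient named in the title is the Space-Saving frequent-item counter (Metwally et al., 2005), which the paper pairs with skip-gram training. A minimal sketch of the counter alone (the embedding-training side is not shown here):

```python
def space_saving(stream, k):
    """Space-Saving: approximate counts of frequent items using at most
    k counters, in one pass over the stream."""
    counters = {}  # item -> approximate count
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # Evict the item with the smallest count and give the
            # newcomer that count + 1 (overestimates by at most the
            # evicted minimum).
            victim = min(counters, key=counters.get)
            counters[item] = counters.pop(victim) + 1
    return counters
```

Heavy hitters are retained with small error while memory stays fixed at k counters, which is what makes a one-pass, bounded-memory vocabulary possible.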

Social Bias in Elicited Natural Language Inferences

1 code implementation WS 2017 Rachel Rudinger, Chandler May, Benjamin Van Durme

We analyze the Stanford Natural Language Inference (SNLI) corpus in an investigation of bias and stereotyping in NLP data.

Language Modelling Natural Language Inference +1

The Semantic Proto-Role Linking Model

no code implementations EACL 2017 Aaron Steven White, Kyle Rawlins, Benjamin Van Durme

We propose the semantic proto-role linking model, which jointly induces both predicate-specific semantic roles and predicate-general semantic proto-roles based on semantic proto-role property likelihood judgments.

Semantic Role Labeling

Efficient, Compositional, Order-sensitive n-gram Embeddings

1 code implementation EACL 2017 Adam Poliak, Pushpendre Rastogi, M. Patrick Martin, Benjamin Van Durme

We propose ECO: a new way to generate embeddings for phrases that is Efficient, Compositional, and Order-sensitive.

Word Embeddings

Discriminative Information Retrieval for Question Answering Sentence Selection

1 code implementation EACL 2017 Tongfei Chen, Benjamin Van Durme

We propose a framework for discriminative IR atop linguistic features, trained to improve the recall of answer candidate passage retrieval, the initial step in text-based question answering.

Information Retrieval Passage Retrieval +1

Feature Generation for Robust Semantic Role Labeling

no code implementations 22 Feb 2017 Travis Wolfe, Mark Dredze, Benjamin Van Durme

Hand-engineered feature sets are a well understood method for creating robust NLP models, but they require a lot of expertise and effort to create.

Semantic Role Labeling

Ordinal Common-sense Inference

no code implementations TACL 2017 Sheng Zhang, Rachel Rudinger, Kevin Duh, Benjamin Van Durme

Humans have the capacity to draw common-sense inferences from natural language: various things that are likely but not certain to hold based on established discourse, and are rarely stated explicitly.

Common Sense Reasoning Natural Language Inference

Computational linking theory

no code implementations 8 Oct 2016 Aaron Steven White, Drew Reisinger, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme

A linking theory explains how verbs' semantic arguments are mapped to their syntactic arguments: the inverse of the Semantic Role Labeling task from the shallow semantic parsing literature.

Semantic Parsing Semantic Role Labeling

Robsut Wrod Reocginiton via semi-Character Recurrent Neural Network

1 code implementation 7 Aug 2016 Keisuke Sakaguchi, Kevin Duh, Matt Post, Benjamin Van Durme

Inspired by the findings from the Cmabrigde Uinervtisy effect, we propose a word recognition model based on a semi-character level recurrent neural network (scRNN).

Spelling Correction

A Critical Examination of RESCAL for Completion of Knowledge Bases with Transitive Relations

no code implementations 16 May 2016 Pushpendre Rastogi, Benjamin Van Durme

Link prediction in large knowledge graphs has received a lot of attention recently because of its importance for inferring missing relations and for completing and improving noisily extracted knowledge graphs.

Knowledge Graphs Link Prediction

Sublinear Partition Estimation

2 code implementations 7 Aug 2015 Pushpendre Rastogi, Benjamin Van Durme

The output scores of a neural network classifier are converted to probabilities via normalizing over the scores of all competing categories.

Language Modelling Object Recognition
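The normalization in question is the softmax, whose partition function costs time linear in the number of categories. Below is that exact computation alongside a naive uniform-sampling estimate of the partition function; both are illustrative assumptions for context (the paper develops sharper sublinear estimators, and `estimate_partition` is a hypothetical helper, not the paper's method):

```python
import math
import random

def softmax(scores):
    # Exact normalization: cost is linear in the number of categories.
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)                         # the partition function
    return [e / z for e in exps]

def estimate_partition(scores, n_samples, seed=0):
    # Naive Monte Carlo: for uniform j, E[|V| * exp(s_j)] = Z,
    # so averaging sampled terms gives an unbiased estimate of Z.
    rng = random.Random(seed)
    v = len(scores)
    total = sum(math.exp(scores[rng.randrange(v)]) for _ in range(n_samples))
    return v * total / n_samples
```

The estimator touches only `n_samples` of the `v` scores, which is the general shape of a sublinear approach to the normalizer.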

Interactive Knowledge Base Population

no code implementations 31 May 2015 Travis Wolfe, Mark Dredze, James Mayfield, Paul McNamee, Craig Harman, Tim Finin, Benjamin Van Durme

Most work on building knowledge bases has focused on collecting entities and facts from as large a collection of documents as possible.

Knowledge Base Population

Semantic Proto-Roles

no code implementations TACL 2015 Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, Benjamin Van Durme

We present the first large-scale, corpus based verification of Dowty{'}s seminal theory of proto-roles.

Semantic Role Labeling

A Wikipedia-based Corpus for Contextualized Machine Translation

no code implementations LREC 2014 Jennifer Drexler, Pushpendre Rastogi, Jacqueline Aguilar, Benjamin Van Durme, Matt Post

We describe a corpus for target-contextualized machine translation (MT), where the task is to improve the translation of source documents using language models built over presumably related documents in the target language.

Domain Adaptation Language Modelling +2
