no code implementations • COLING (CRAC) 2022 • Patrick Xia, Benjamin Van Durme
Humans process natural language online, whether reading a document or participating in multiparty dialogue.
1 code implementation • NAACL (Wordplay) 2022 • Ryan Volum, Sudha Rao, Michael Xu, Gabriel DesGarennes, Chris Brockett, Benjamin Van Durme, Olivia Deng, Akanksha Malhotra, Bill Dolan
In this work, we demonstrate that use of a few example conversational prompts can power a conversational agent to generate both natural language and novel code.
no code implementations • ACL 2022 • Anton Belyy, Chieh-Yang Huang, Jacob Andreas, Emmanouil Antonios Platanios, Sam Thomson, Richard Shin, Subhro Roy, Aleksandr Nisnevich, Charles Chen, Benjamin Van Durme
Collecting data for conversational semantic parsing is a time-consuming and demanding process.
1 code implementation • *SEM (NAACL) 2022 • Andrew Blair-Stanek, Benjamin Van Durme
The standard approach for inducing narrative chains considers statistics gathered per individual document.
1 code implementation • 1 Jun 2023 • Elias Stengel-Eskin, Kyle Rawlins, Benjamin Van Durme
We attempt to address this shortcoming by introducing AmP, a framework, dataset, and challenge for parsing with linguistic ambiguity.
no code implementations • 24 May 2023 • Ishani Mondal, Michelle Yuan, Anandhavelu N, Aparna Garimella, Francis Ferraro, Andrew Blair-Stanek, Benjamin Van Durme, Jordan Boyd-Graber
Learning template based information extraction from documents is a crucial yet difficult task.
no code implementations • 23 May 2023 • Haoran Xu, Weiting Tan, Shuyue Stella Li, Yunmo Chen, Benjamin Van Durme, Philipp Koehn, Kenton Murray
Incorporating language-specific (LS) modules is a proven method to boost performance in multilingual machine translation.
no code implementations • 22 May 2023 • Orion Weller, Marc Marone, Nathaniel Weir, Dawn Lawrie, Daniel Khashabi, Benjamin Van Durme
Large Language Models (LLMs) may hallucinate and generate fake information, despite pre-training on factual data.
no code implementations • 12 May 2023 • Orion Weller, Dawn Lawrie, Benjamin Van Durme
Although the Information Retrieval (IR) community has adopted LMs as the backbone of modern IR architectures, there has been little to no research in understanding how negation impacts neural IR.
no code implementations • 29 Mar 2023 • Elias Stengel-Eskin, Benjamin Van Durme
We then examine how confidence scores can help optimize the trade-off between usability and safety.
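As a rough illustration of the kind of trade-off meant here (this is not the paper's method, and the threshold is a purely hypothetical operating point), a parser might act only when it is sufficiently confident:

```python
def act_or_defer(parse, confidence, threshold=0.8):
    """Execute the predicted parse only when confidence clears a threshold;
    otherwise defer to the user. Raising the threshold favors safety,
    lowering it favors usability. The 0.8 default is purely illustrative."""
    return parse if confidence >= threshold else "ask user to confirm"

print(act_or_defer("delete_file(x)", 0.55))  # low confidence -> defer
```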
no code implementations • 6 Mar 2023 • Marc Marone, Benjamin Van Durme
Foundation models are trained on increasingly immense and opaque datasets.
1 code implementation • 13 Feb 2023 • Andrew Blair-Stanek, Nils Holzenberger, Benjamin Van Durme
Statutory reasoning is the task of reasoning with facts and statutes, which are rules written in natural language by a legislature.
no code implementations • 20 Dec 2022 • Nathaniel Weir, Ryan Thomas, Randolph D'Amore, Kellie Hill, Benjamin Van Durme, Harsh Jhamtani
We introduce a language generation task grounded in a popular video game environment.
no code implementations • 20 Dec 2022 • Orion Weller, Aleem Khan, Nathaniel Weir, Dawn Lawrie, Benjamin Van Durme
Recent work in open-domain question answering (ODQA) has shown that adversarial poisoning of the search collection can cause large drops in accuracy for production systems.
no code implementations • 20 Dec 2022 • Kangda Wei, Dawn Lawrie, Benjamin Van Durme, Yunmo Chen, Orion Weller
Answering complex questions often requires multi-step reasoning in order to obtain the final answer.
1 code implementation • CVPR 2023 • Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei Ma, Benjamin Van Durme, Alan Yuille
Visual Question Answering (VQA) models often perform poorly on out-of-distribution data and struggle on domain generalization.
no code implementations • 1 Dec 2022 • Zhuowan Li, Cihang Xie, Benjamin Van Durme, Alan Yuille
In this work, we investigate how language can help with visual representation learning from a probing perspective.
1 code implementation • 14 Nov 2022 • Elias Stengel-Eskin, Jimena Guallar-Blasco, Yi Zhou, Benjamin Van Durme
Natural language is ambiguous.
2 code implementations • 14 Nov 2022 • Elias Stengel-Eskin, Benjamin Van Durme
Sequence generation models are increasingly being used to translate language into executable programs, i.e., to perform executable semantic parsing.
no code implementations • 20 Oct 2022 • Yukun Feng, Patrick Xia, Benjamin Van Durme, João Sedoc
Building pretrained language models is considered expensive and data-intensive, but must we increase dataset size to achieve better performance?
no code implementations • 13 Oct 2022 • Weiwei Gu, Boyuan Zheng, Yunmo Chen, Tongfei Chen, Benjamin Van Durme
We present an empirical study on methods for span finding, the selection of consecutive tokens in text for some downstream tasks.
1 code implementation • 12 Oct 2022 • Yunmo Chen, William Gantt, Weiwei Gu, Tongfei Chen, Aaron Steven White, Benjamin Van Durme
We present a novel iterative extraction model, IterX, for extracting complex relations, or templates (i.e., N-tuples representing a mapping from named slots to spans of text) within a document.
no code implementations • 6 Oct 2022 • Kate Sanders, Reno Kriz, Anqi Liu, Benjamin Van Durme
However, humans are frequently presented with visual data that they cannot classify with 100% certainty, and models trained on standard vision benchmarks achieve low performance when evaluated on this data.
no code implementations • 16 Sep 2022 • Nathaniel Weir, Benjamin Van Durme
We propose an approach for systematic reasoning that produces human interpretable proof trees grounded in a factbase.
no code implementations • 2 Aug 2022 • Boyuan Zheng, Patrick Xia, Mahsa Yarmohammadi, Benjamin Van Durme
Existing multiparty dialogue datasets for coreference resolution are nascent, and many challenges are still unaddressed.
1 code implementation • RepL4NLP (ACL) 2022 • Shijie Wu, Benjamin Van Durme, Mark Dredze
Pretrained multilingual encoders enable zero-shot cross-lingual transfer, but often produce unreliable models that exhibit high performance variance on the target language.
1 code implementation • 21 Jun 2022 • Subhro Roy, Sam Thomson, Tongfei Chen, Richard Shin, Adam Pauls, Jason Eisner, Benjamin Van Durme
We introduce BenchCLAMP, a Benchmark to evaluate Constrained LAnguage Model Parsing, which produces semantic outputs based on the analysis of input text through constrained decoding of a prompted or fine-tuned language model.
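A minimal sketch of the constrained-decoding idea BenchCLAMP evaluates (this is not its actual API, just the core mechanism): at each step, mask out every next token the grammar disallows, so the model can only produce well-formed parses.

```python
import math

def mask_logits(logits, allowed_ids):
    """Grammar-constrained decoding step: any token not currently permitted
    by the grammar gets logit -inf (probability zero), so only well-formed
    semantic parses can be generated."""
    return [l if i in allowed_ids else -math.inf for i, l in enumerate(logits)]

# e.g. if the grammar only permits tokens 2 and 5 at this position:
print(mask_logits([0.1, 1.2, 0.7, -0.3, 0.9, 2.0], {2, 5}))
```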
1 code implementation • NAACL 2022 • Orion Weller, Marc Marone, Vladimir Braverman, Dawn Lawrie, Benjamin Van Durme
Since the advent of Federated Learning (FL), research has applied these methods to natural language processing (NLP) tasks.
no code implementations • 25 May 2022 • Nils Holzenberger, Yunmo Chen, Benjamin Van Durme
Information Extraction (IE) researchers are mapping tasks to Question Answering (QA) in order to leverage existing large QA resources, and thereby improve data efficiency.
1 code implementation • 24 May 2022 • Elias Stengel-Eskin, Emmanouil Antonios Platanios, Adam Pauls, Sam Thomson, Hao Fang, Benjamin Van Durme, Jason Eisner, Yu Su
Rejecting class imbalance as the sole culprit, we reveal that the trend is closely associated with an effect we call source signal dilution, where strong lexical cues for the new symbol become diluted as the training dataset grows.
1 code implementation • 24 May 2022 • Elias Stengel-Eskin, Benjamin Van Durme
Given the advanced fluency of large generative language models, we ask whether model outputs are consistent with these heuristics, and to what degree different models are consistent with each other.
no code implementations • Findings (ACL) 2022 • Kevin Yang, Olivia Deng, Charles Chen, Richard Shin, Subhro Roy, Benjamin Van Durme
We introduce a novel setup for low-resource task-oriented semantic parsing which incorporates several constraints that may arise in real-world scenarios: (1) lack of similar datasets/models from a related domain, (2) inability to sample useful logical forms directly from a grammar, and (3) privacy requirements for unlabeled natural utterances.
1 code implementation • NAACL 2022 • Chenyu Zhang, Benjamin Van Durme, Zhuowan Li, Elias Stengel-Eskin
Our commonsense knowledge about objects includes their typical visual attributes; we know that bananas are typically yellow or green, and not purple.
Ranked #1 on Visual Commonsense Tests on ViComTe-color
no code implementations • 9 Mar 2022 • Nathaniel Weir, Xingdi Yuan, Marc-Alexandre Côté, Matthew Hausknecht, Romain Laroche, Ida Momennejad, Harm van Seijen, Benjamin Van Durme
Humans have the capability, aided by the expressive compositionality of their language, to learn quickly by demonstration.
no code implementations • 16 Feb 2022 • Guanghui Qin, Yukun Feng, Benjamin Van Durme
Transformer models cannot easily scale to long sequences due to their O(N^2) time and space complexity.
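The quadratic cost is visible directly in the shape of the attention score matrix; a minimal NumPy sketch (identity query/key projections and illustrative dimensions, for exposition only):

```python
import numpy as np

def attention_score_matrix(X):
    """Self-attention compares every position with every other one,
    producing an N x N score matrix: O(N^2) time and memory in N."""
    d = X.shape[-1]
    return (X @ X.T) / np.sqrt(d)  # queries and keys both X, for simplicity

X = np.random.randn(4096, 64)          # N = 4096 tokens
print(attention_score_matrix(X).shape) # (4096, 4096): quadratic blow-up
```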
no code implementations • NAACL 2022 • Richard Shin, Benjamin Van Durme
Intuitively, such models can more easily output canonical utterances as they are closer to the natural language used for pre-training.
1 code implementation • ICCV 2021 • Zhuowan Li, Elias Stengel-Eskin, Yixiao Zhang, Cihang Xie, Quan Tran, Benjamin Van Durme, Alan Yuille
Our experiments show CCO substantially boosts the performance of neural symbolic methods on real images.
2 code implementations • EMNLP 2021 • Mahsa Yarmohammadi, Shijie Wu, Marc Marone, Haoran Xu, Seth Ebner, Guanghui Qin, Yunmo Chen, Jialiang Guo, Craig Harman, Kenton Murray, Aaron Steven White, Mark Dredze, Benjamin Van Durme
Zero-shot cross-lingual information extraction (IE) describes the construction of an IE model for some target language, given existing annotations exclusively in some other language, typically English.
2 code implementations • EMNLP 2021 • Haoran Xu, Benjamin Van Durme, Kenton Murray
The success of bidirectional encoders using masked language models, such as BERT, on numerous natural language processing tasks has prompted researchers to attempt to incorporate these pre-trained models into neural machine translation (NMT) systems.
Ranked #1 on Machine Translation on IWSLT2014 German-English
no code implementations • 21 Jul 2021 • Zhongyang Li, Xiao Ding, Ting Liu, J. Edward Hu, Benjamin Van Durme
We present a conditional text generation framework that posits sentential expressions of possible causes and effects.
1 code implementation • ACL 2021 • Nils Holzenberger, Benjamin Van Durme
Statutory reasoning is the task of determining whether a legal statute, stated in natural language, applies to the text description of a case.
1 code implementation • EMNLP 2021 • Richard Shin, Christopher H. Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, Benjamin Van Durme
We explore the use of large pretrained language models as few-shot semantic parsers.
1 code implementation • LREC (LAW) 2022 • Noah Weber, Anton Belyy, Nils Holzenberger, Rachel Rudinger, Benjamin Van Durme
Event schemas are structured knowledge sources defining typical real-world scenarios (e.g., going to an airport).
2 code implementations • EMNLP 2021 • Patrick Xia, Benjamin Van Durme
Academic neural models for coreference resolution (coref) are typically trained on a single dataset, OntoNotes, and model improvements are benchmarked on that same dataset.
1 code implementation • ACL 2022 • Michelle Yuan, Patrick Xia, Chandler May, Benjamin Van Durme, Jordan Boyd-Graber
Active learning mitigates this problem by sampling a small subset of data for annotators to label.
1 code implementation • 12 Apr 2021 • Elias Stengel-Eskin, Kenton Murray, Sheng Zhang, Aaron Steven White, Benjamin Van Durme
While numerous attempts have been made to jointly parse syntax and semantics, high performance in one domain typically comes at the price of performance in the other.
no code implementations • Joint Conference on Lexical and Computational Semantics 2021 • Jiefu Ou, Nathaniel Weir, Anton Belyy, Felix Yu, Benjamin Van Durme
We propose a structured extension to bidirectional-context conditional language generation, or "infilling," inspired by Frame Semantic theory (Fillmore, 1976).
2 code implementations • EACL (AdaptNLP) 2021 • Haoran Xu, Seth Ebner, Mahsa Yarmohammadi, Aaron Steven White, Benjamin Van Durme, Kenton Murray
Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain.
no code implementations • EACL 2021 • Patrick Xia, Guanghui Qin, Siddharth Vashishtha, Yunmo Chen, Tongfei Chen, Chandler May, Craig Harman, Kyle Rawlins, Aaron Steven White, Benjamin Van Durme
We present LOME, a system for performing multilingual information extraction.
1 code implementation • 20 Nov 2020 • Yunmo Chen, Tongfei Chen, Benjamin Van Durme
We observe that event argument linking in documents is similar to intent slot resolution in dialogue, and provide a Transformer-based model that extends a recently proposed solution for resolving references to slots.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Siddharth Vashishtha, Adam Poliak, Yash Kumar Lal, Benjamin Van Durme, Aaron Steven White
We introduce five new natural language inference (NLI) datasets focused on temporal reasoning.
no code implementations • EMNLP (spnlp) 2020 • Abhinav Singh, Patrick Xia, Guanghui Qin, Mahsa Yarmohammadi, Benjamin Van Durme
Copy mechanisms are employed in sequence-to-sequence (seq2seq) models to reproduce words from the input in the output.
1 code implementation • EMNLP 2020 • Nathaniel Weir, João Sedoc, Benjamin Van Durme
We present COD3S, a novel method for generating semantically diverse sentences using neural sequence-to-sequence (seq2seq) models.
no code implementations • EMNLP 2020 • Patrick Xia, Shijie Wu, Benjamin Van Durme
Pretrained contextualized text encoders are now a staple of the NLP community.
no code implementations • WS 2020 • Anton Belyy, Benjamin Van Durme
We show that the count-based Script Induction models of Chambers and Jurafsky (2008) and Jans et al. (2012) can be unified in a general framework of narrative chain likelihood maximization.
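For concreteness, the pairwise association score at the heart of Chambers and Jurafsky's count-based model is pointwise mutual information over event co-occurrence counts; a minimal sketch (the unified models differ in how counts are gathered and how chains are scored):

```python
import math

def pmi(c_xy, c_x, c_y, n):
    """Pointwise mutual information between events x and y, computed from
    co-occurrence count c_xy, marginal counts c_x and c_y, and n total
    observations: log [ p(x, y) / (p(x) p(y)) ]."""
    return math.log((c_xy / n) / ((c_x / n) * (c_y / n)))

print(pmi(c_xy=30, c_x=100, c_y=120, n=10_000))  # positive => associated
```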
no code implementations • 1 Jul 2020 • Ryan Culkin, J. Edward Hu, Elias Stengel-Eskin, Guanghui Qin, Benjamin Van Durme
We introduce a novel paraphrastic augmentation strategy based on sentence-level lexically constrained paraphrasing and discriminative span alignment.
1 code implementation • 11 May 2020 • Nils Holzenberger, Andrew Blair-Stanek, Benjamin Van Durme
Legislation can be viewed as a body of prescriptive rules expressed in natural language.
1 code implementation • EMNLP 2020 • Patrick Xia, João Sedoc, Benjamin Van Durme
We investigate modeling coreference resolution under a fixed memory constraint by extending an incremental clustering algorithm to utilize contextualized encoders and neural components.
no code implementations • 29 Apr 2020 • Luyu Gao, Zhuyun Dai, Tongfei Chen, Zhen Fan, Benjamin Van Durme, Jamie Callan
This paper presents CLEAR, a retrieval model that seeks to complement classical lexical exact-match models such as BM25 with semantic matching signals from a neural embedding matching model.
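A generic sketch of combining lexical and semantic signals (CLEAR actually trains its embedding model to target BM25's residual errors; the interpolation and the weight `lam` below are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def hybrid_score(bm25, q_emb, d_emb, lam=0.5):
    """Generic lexical-plus-semantic ranking: interpolate an exact-match
    score (e.g. BM25) with embedding cosine similarity. `lam` is a
    hypothetical mixing weight, not a value from the paper."""
    cos = q_emb @ d_emb / (np.linalg.norm(q_emb) * np.linalg.norm(d_emb))
    return bm25 + lam * cos
```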
no code implementations • 10 Apr 2020 • Nathaniel Weir, Adam Poliak, Benjamin Van Durme
Our prompts are based on human responses in a psychological study of conceptual associations.
1 code implementation • ACL 2020 • Tongfei Chen, Yunmo Chen, Benjamin Van Durme
We propose a novel method for hierarchical entity classification that embraces ontological structure during both training and prediction.
no code implementations • EMNLP 2020 • Noah Weber, Rachel Rudinger, Benjamin Van Durme
When does a sequence of events define an everyday scenario and how can this knowledge be induced from text?
no code implementations • EMNLP (spnlp) 2020 • Yunmo Chen, Tongfei Chen, Seth Ebner, Aaron Steven White, Benjamin Van Durme
We ask whether text understanding has progressed to where we may extract event information through incremental refinement of bleached statements derived from annotation manuals.
no code implementations • ACL 2020 • Seth Ebner, Patrick Xia, Ryan Culkin, Kyle Rawlins, Benjamin Van Durme
We present a novel document-level model for finding argument spans that fill an event's roles, connecting related ideas in sentence-level semantic role labeling and coreference resolution.
1 code implementation • EMNLP 2020 • Michelle Yuan, Mozhi Zhang, Benjamin Van Durme, Leah Findlater, Jordan Boyd-Graber
Cross-lingual word embeddings transfer knowledge between languages: models trained on high-resource languages can predict in low-resource languages.
no code implementations • WS 2019 • Seth Ebner, Felicity Wang, Benjamin Van Durme
Many architectures for multi-task learning (MTL) have been proposed to take advantage of transfer among tasks, often involving complex models and training procedures.
no code implementations • CONLL 2019 • J. Edward Hu, Abhinav Singh, Nils Holzenberger, Matt Post, Benjamin Van Durme
Producing diverse paraphrases of a sentence is a challenging task.
no code implementations • ACL 2020 • Elias Stengel-Eskin, Aaron Steven White, Sheng Zhang, Benjamin Van Durme
We introduce a transductive model for parsing into Universal Decompositional Semantics (UDS) representations, which jointly learns to map natural language utterances into UDS graph structures and annotate the graph with decompositional semantic attribute scores.
1 code implementation • 6 Oct 2019 • Matthew Francis-Landau, Benjamin Van Durme
Prior methods for retrieval of nearest neighbors in high dimensions are either fast and approximate, providing probabilistic guarantees of returning the correct answer, or slow and exact, performing an exhaustive search.
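The slow-and-exact baseline in that dichotomy is a simple exhaustive scan; a sketch for Euclidean distance (shapes are illustrative):

```python
import numpy as np

def exact_nearest_neighbor(query, database):
    """Exhaustive exact search: O(n * d) per query, but always returns the
    true nearest neighbor, unlike approximate indexes (e.g. LSH)."""
    dists = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(dists))

db = np.random.randn(10_000, 128)  # 10k vectors in 128 dimensions
print(exact_nearest_neighbor(np.random.randn(128), db))
```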
1 code implementation • LREC 2020 • Aaron Steven White, Elias Stengel-Eskin, Siddharth Vashishtha, Venkata Govindarajan, Dee Ann Reisinger, Tim Vieira, Keisuke Sakaguchi, Sheng Zhang, Francis Ferraro, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
We present the Universal Decompositional Semantics (UDS) dataset (v1.0), which is bundled with the Decomp toolkit (v0.1).
no code implementations • ACL 2020 • Tongfei Chen, Zhengping Jiang, Adam Poliak, Keisuke Sakaguchi, Benjamin Van Durme
We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments.
no code implementations • IJCNLP 2019 • Sheng Zhang, Xutai Ma, Kevin Duh, Benjamin Van Durme
We unify different broad-coverage semantic parsing tasks under a transduction paradigm, and propose an attention-based neural framework that incrementally builds a meaning representation via a sequence of semantic relations.
Ranked #2 on UCCA Parsing on SemEval 2019 Task 1
no code implementations • IJCNLP 2019 • Elias Stengel-Eskin, Tzu-Ray Su, Matt Post, Benjamin Van Durme
We introduce a novel discriminative word alignment model, which we integrate into a Transformer-based machine translation model.
1 code implementation • SEMEVAL 2019 • Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, Alexander M. Rush
Popular Natural Language Inference (NLI) datasets have been shown to be tainted by hypothesis-only biases.
1 code implementation • ACL 2019 • Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, Alexander M. Rush
In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise.
no code implementations • ACL 2019 • Zhongyang Li, Tongfei Chen, Benjamin Van Durme
Researchers illustrate improvements in contextual encoding strategies via resultant performance on a battery of shared Natural Language Understanding (NLU) tasks.
1 code implementation • NAACL 2019 • J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, Benjamin Van Durme
Lexically-constrained sequence decoding allows for explicit positive or negative phrase-based constraints to be placed on target output strings in generation tasks such as machine translation or monolingual text rewriting.
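As a toy illustration of what such constraints mean (real constrained decoding enforces them during beam search, e.g. via variants of dynamic beam allocation, rather than by filtering finished outputs as this post-hoc check does):

```python
def meets_constraints(text, positive=(), negative=()):
    """Toy post-hoc check of phrase constraints: every positive phrase must
    appear in the output and no negative phrase may. Actual constrained
    decoding guarantees this during search."""
    return (all(p in text for p in positive)
            and not any(n in text for n in negative))

print(meets_constraints("the big cat sat down", ["big cat"], ["dog"]))  # True
```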
1 code implementation • ACL 2019 • Sheng Zhang, Xutai Ma, Kevin Duh, Benjamin Van Durme
Our experimental results outperform all previously reported SMATCH scores, on both AMR 2.0 (76.3% F1 on LDC2017T10) and AMR 1.0 (70.2% F1 on LDC2014T12).
Ranked #1 on AMR Parsing on LDC2014T12
2 code implementations • ICLR 2019 • Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick
The jiant toolkit for general-purpose text understanding models
no code implementations • ICLR 2019 • Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen
Work on the problem of contextualized word representation (the development of reusable neural network components for sentence understanding) has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018).
no code implementations • SEMEVAL 2019 • Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick
Our results show that pretraining on language modeling performs the best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models, and CCG supertagging and NLI pretraining perform comparably.
no code implementations • ACL 2019 • Siddharth Vashishtha, Benjamin Van Durme, Aaron Steven White
We present a novel semantic framework for modeling temporal relations and event durations that maps pairs of events to real-valued scales.
no code implementations • TACL 2019 • Venkata Subrahmanyan Govindarajan, Benjamin Van Durme, Aaron Steven White
We present a novel semantic framework for modeling linguistic expressions of generalization (generic, habitual, and episodic statements) as combinations of simple, real-valued referential properties of predicates and their arguments.
no code implementations • 11 Jan 2019 • J. Edward Hu, Rachel Rudinger, Matt Post, Benjamin Van Durme
We present ParaBank, a large-scale English paraphrase dataset that surpasses prior work in both quantity and quality.
no code implementations • ACL 2019 • Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, Samuel R. Bowman
Natural language understanding has recently seen a surge of progress with the use of sentence encoders like ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) which are pretrained on variants of language modeling.
1 code implementation • NeurIPS 2018 • Michelle Yuan, Benjamin Van Durme, Jordan L. Ying
Multilingual topic models can reveal patterns in cross-lingual document collections.
no code implementations • 30 Oct 2018 • Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, Benjamin Van Durme
We present a large-scale dataset, ReCoRD, for machine reading comprehension requiring commonsense reasoning.
no code implementations • EMNLP 2018 • Sheng Zhang, Xutai Ma, Rachel Rudinger, Kevin Duh, Benjamin Van Durme
We introduce the task of cross-lingual decompositional semantic parsing: mapping content provided in a source language into a decompositional semantic analysis based on a target language.
no code implementations • 20 Sep 2018 • Najoung Kim, Kyle Rawlins, Benjamin Van Durme, Paul Smolensky
Distinguishing between arguments and adjuncts of a verb is a longstanding, nontrivial problem.
no code implementations • EMNLP 2018 • Aaron Steven White, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
We use this dataset, which we make publicly available, to probe the behavior of current state-of-the-art neural systems, showing that these systems make certain systematic errors that are clearly visible through the lens of factuality prediction.
no code implementations • ACL 2018 • Keisuke Sakaguchi, Benjamin Van Durme
We describe a novel method for efficiently eliciting scalar annotations for dataset construction and system quality estimation by human judgments.
no code implementations • SEMEVAL 2018 • Hongyuan Mei, Sheng Zhang, Kevin Duh, Benjamin Van Durme
Cross-lingual information extraction (CLIE) is an important and challenging task, especially in low resource scenarios.
1 code implementation • SEMEVAL 2018 • Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
We propose a hypothesis only baseline for diagnosing Natural Language Inference (NLI).
1 code implementation • NAACL 2018 • Adam Poliak, Yonatan Belinkov, James Glass, Benjamin Van Durme
We propose a process for investigating the extent to which sentence representations arising from neural machine translation (NMT) systems encode distinct semantic phenomena.
2 code implementations • NAACL 2018 • Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme
We present an empirical study of gender bias in coreference resolution systems.
no code implementations • EMNLP (ACL) 2018 • Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme
We present a large-scale collection of diverse natural language inference (NLI) datasets that help provide insight into how well a sentence representation captures distinct types of reasoning.
1 code implementation • EMNLP 2018 • Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, Benjamin Van Durme
We present a model for semantic proto-role labeling (SPRL) using an adapted bidirectional LSTM encoding strategy that we call "Neural-Davidsonian": predicate-argument structure is represented as pairs of hidden states corresponding to predicate and argument head tokens of the input sequence.
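A sketch of that pairing idea in PyTorch (dimensions, embedding size, and property count are illustrative assumptions, not the paper's settings): concatenate the BiLSTM hidden states at the predicate and argument head tokens, then score each proto-role property.

```python
import torch
import torch.nn as nn

class PredArgScorer(nn.Module):
    """Neural-Davidsonian-style scorer: a predicate-argument pair is the
    concatenation of the encoder states at the predicate and argument head
    tokens, fed to a linear layer with one output per proto-role property."""
    def __init__(self, hidden=256, n_properties=18):
        super().__init__()
        self.encoder = nn.LSTM(300, hidden, bidirectional=True, batch_first=True)
        self.scorer = nn.Linear(4 * hidden, n_properties)

    def forward(self, embeddings, pred_idx, arg_idx):
        states, _ = self.encoder(embeddings)   # (B, T, 2 * hidden)
        pair = torch.cat([states[:, pred_idx], states[:, arg_idx]], dim=-1)
        return self.scorer(pair)               # one score per property

x = torch.randn(2, 12, 300)                    # 2 sentences, 12 tokens each
print(PredArgScorer()(x, pred_idx=3, arg_idx=7).shape)  # (2, 18)
```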
1 code implementation • SEMEVAL 2018 • Sheng Zhang, Kevin Duh, Benjamin Van Durme
Fine-grained entity typing is the task of assigning fine-grained semantic types to entity mentions.
no code implementations • 21 Apr 2018 • Sheng Zhang, Kevin Duh, Benjamin Van Durme
We introduce the task of cross-lingual semantic parsing: mapping content provided in a source language into a meaning representation based on a target language.
1 code implementation • NAACL 2018 • Rachel Rudinger, Aaron Steven White, Benjamin Van Durme
We present two neural models for event factuality prediction, which yield significant performance gains over previous models on three event factuality datasets: FactBank, UW, and MEANTIME.
no code implementations • IJCNLP 2017 • Sheng Zhang, Kevin Duh, Benjamin Van Durme
Cross-lingual open information extraction is the task of distilling facts from the source language into representations in the target language.
no code implementations • IJCNLP 2017 • Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme
We propose to unify a variety of existing semantic classification tasks, such as semantic role labeling, anaphora resolution, and paraphrase detection, under the heading of Recognizing Textual Entailment (RTE).
no code implementations • IJCNLP 2017 • Benjamin Van Durme, Tom Lippincott, Kevin Duh, Deana Burchfield, Adam Poliak, Cash Costello, Tim Finin, Scott Miller, James Mayfield, Philipp Koehn, Craig Harman, Dawn Lawrie, Chandler May, Max Thomas, Annabelle Carrell, Julianne Chaloux, Tongfei Chen, Alex Comerford, Mark Dredze, Benjamin Glass, Shudong Hao, Patrick Martin, Pushpendre Rastogi, Rashmi Sankepally, Travis Wolfe, Ying-Ying Tran, Ted Zhang
It combines a multitude of analytics with a flexible environment for customizing the workflow for different users.
no code implementations • IJCNLP 2017 • Keisuke Sakaguchi, Matt Post, Benjamin Van Durme
We propose a neural encoder-decoder model with reinforcement learning (NRL) for grammatical error correction (GEC).
1 code implementation • ACL 2017 • Keisuke Sakaguchi, Matt Post, Benjamin Van Durme
We propose a new dependency parsing scheme which jointly parses a sentence and repairs grammatical errors by extending the non-directional transition-based formalism of Goldberg and Elhadad (2010) with three additional actions: SUBSTITUTE, DELETE, INSERT.
no code implementations • ACL 2017 • Travis Wolfe, Mark Dredze, Benjamin Van Durme
Existing Knowledge Base Population methods extract relations from a closed relational schema with limited coverage leading to sparse KBs.
no code implementations • ACL 2017 • Nicholas Andrews, Mark Dredze, Benjamin Van Durme, Jason Eisner
Practically, this means that we may treat the lexical resources as observations under the proposed generative model.
1 code implementation • SEMEVAL 2017 • Francis Ferraro, Adam Poliak, Ryan Cotterell, Benjamin Van Durme
We study how different frame annotations complement one another when learning continuous lexical semantics.
2 code implementations • 24 Apr 2017 • Chandler May, Kevin Duh, Benjamin Van Durme, Ashwin Lall
We develop a streaming (one-pass, bounded-memory) word embedding algorithm based on the canonical skip-gram with negative sampling algorithm implemented in word2vec.
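For reference, this is the canonical per-pair update of skip-gram with negative sampling that the streaming algorithm builds on; the sketch below is vanilla word2vec, not the paper's one-pass, bounded-memory machinery.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_update(center, context, negatives, W_in, W_out, lr=0.025):
    """One skip-gram-with-negative-sampling step, word2vec style: push the
    center vector toward its observed context word and away from sampled
    noise words. Center-vector gradients accumulate and apply once, as in
    the reference implementation."""
    v = W_in[center]
    v_grad = np.zeros_like(v)
    for idx, label in [(context, 1.0)] + [(neg, 0.0) for neg in negatives]:
        g = lr * (label - sigmoid(v @ W_out[idx]))
        v_grad += g * W_out[idx]   # uses the pre-update output vector
        W_out[idx] += g * v
    W_in[center] += v_grad
```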
1 code implementation • EACL 2017 • Tongfei Chen, Benjamin Van Durme
We propose a framework for discriminative IR atop linguistic features, trained to improve the recall of answer candidate passage retrieval, the initial step in text-based question answering.
1 code implementation • WS 2017 • Rachel Rudinger, Ch May, ler, Benjamin Van Durme
We analyze the Stanford Natural Language Inference (SNLI) corpus in an investigation of bias and stereotyping in NLP data.
1 code implementation • EACL 2017 • Adam Poliak, Pushpendre Rastogi, M. Patrick Martin, Benjamin Van Durme
We propose ECO: a new way to generate embeddings for phrases that is Efficient, Compositional, and Order-sensitive.
no code implementations • EACL 2017 • Ryan Cotterell, Adam Poliak, Benjamin Van Durme, Jason Eisner
The popular skip-gram model induces word embeddings by exploiting the signal from word-context co-occurrence.
no code implementations • EACL 2017 • Aaron Steven White, Kyle Rawlins, Benjamin Van Durme
We propose the semantic proto-role linking model, which jointly induces both predicate-specific semantic roles and predicate-general semantic proto-roles based on semantic proto-role property likelihood judgments.
no code implementations • EACL 2017 • Sheng Zhang, Kevin Duh, Benjamin Van Durme
Conventional pipeline solutions decompose the task as machine translation followed by information extraction (or vice versa).
no code implementations • 22 Feb 2017 • Travis Wolfe, Mark Dredze, Benjamin Van Durme
Hand-engineered feature sets are a well understood method for creating robust NLP models, but they require a lot of expertise and effort to create.
no code implementations • TACL 2017 • Sheng Zhang, Rachel Rudinger, Kevin Duh, Benjamin Van Durme
Humans have the capacity to draw common-sense inferences from natural language: various things that are likely but not certain to hold based on established discourse, and are rarely stated explicitly.
no code implementations • 8 Oct 2016 • Aaron Steven White, Drew Reisinger, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
A linking theory explains how verbs' semantic arguments are mapped to their syntactic arguments: the inverse of the Semantic Role Labeling task from the shallow semantic parsing literature.
no code implementations • 13 Aug 2016 • Chandler May, Ryan Cotterell, Benjamin Van Durme
Topic models are typically represented by top-$m$ word lists for human interpretation.
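Concretely, that standard representation is just the m highest-probability words of each topic (the vocabulary and probabilities below are made up for illustration):

```python
import numpy as np

def top_m_words(topic_word_probs, vocab, m=10):
    """A topic's conventional summary: its m highest-probability words."""
    top = np.argsort(topic_word_probs)[::-1][:m]
    return [vocab[i] for i in top]

vocab = ["game", "team", "court", "election", "vote"]
probs = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
print(top_m_words(probs, vocab, m=3))  # ['game', 'team', 'court']
```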
1 code implementation • 7 Aug 2016 • Keisuke Sakaguchi, Kevin Duh, Matt Post, Benjamin Van Durme
Inspired by the findings from the Cmabrigde Uinervtisy effect, we propose a word recognition model based on a semi-character level recurrent neural network (scRNN).
no code implementations • 16 May 2016 • Pushpendre Rastogi, Benjamin Van Durme
Link prediction in large knowledge graphs has received a lot of attention recently because of its importance for inferring missing relations and for completing and improving noisily extracted knowledge graphs.
2 code implementations • 7 Aug 2015 • Pushpendre Rastogi, Benjamin Van Durme
The output scores of a neural network classifier are converted to probabilities via normalizing over the scores of all competing categories.
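That normalization is the familiar softmax; a numerically stable NumPy sketch of the standard form (illustrative only, not the paper's contribution):

```python
import numpy as np

def softmax(scores):
    """Convert raw classifier scores to probabilities by normalizing over
    the scores of all competing categories (numerically stable form)."""
    shifted = scores - np.max(scores)  # guard against overflow in exp
    exp = np.exp(shifted)
    return exp / exp.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # approx [0.659, 0.242, 0.099]
```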
no code implementations • 31 May 2015 • Travis Wolfe, Mark Dredze, James Mayfield, Paul McNamee, Craig Harman, Tim Finin, Benjamin Van Durme
Most work on building knowledge bases has focused on collecting entities and facts from as large a collection of documents as possible.
no code implementations • WS 2012 • Vinodkumar Prabhakaran, Michael Bloodgood, Mona Diab, Bonnie Dorr, Lori Levin, Christine D. Piatko, Owen Rambow, Benjamin Van Durme
We explore training an automatic modality tagger.
no code implementations • TACL 2015 • Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, Benjamin Van Durme
We present the first large-scale, corpus-based verification of Dowty's seminal theory of proto-roles.
no code implementations • LREC 2014 • Jennifer Drexler, Pushpendre Rastogi, Jacqueline Aguilar, Benjamin Van Durme, Matt Post
We describe a corpus for target-contextualized machine translation (MT), where the task is to improve the translation of source documents using language models built over presumably related documents in the target language.