Automatic interpretation of the relation between the constituents of a noun compound, e.g., olive oil (source) and baby oil (purpose), is an important task for many NLP applications.
On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively.
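The relative-position idea can be sketched in a few lines: instead of adding absolute position encodings to the inputs, each attention logit gets an extra term from a learned embedding of the (clipped) offset between query and key positions. The function and variable names below are illustrative, not from the paper; this is a minimal numpy sketch, not the full multi-head implementation.

```python
import numpy as np

def relative_attention_scores(q, k, rel_emb, max_dist=4):
    """Self-attention logits with learned relative-position embeddings.

    rel_emb holds one vector per clipped offset in [-max_dist, max_dist],
    so it has 2 * max_dist + 1 rows (an illustrative sketch of the idea).
    """
    n, d = q.shape
    scores = q @ k.T  # content-based term
    for i in range(n):
        for j in range(n):
            # clip the offset so distant positions share one embedding
            r = int(np.clip(j - i, -max_dist, max_dist)) + max_dist
            scores[i, j] += q[i] @ rel_emb[r]
    return scores / np.sqrt(d)

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 8))
k = rng.normal(size=(5, 8))
rel = rng.normal(size=(2 * 4 + 1, 8))  # embeddings for offsets -4..4
out = relative_attention_scores(q, k, rel)
```

Clipping to a maximum distance is what lets the model generalize to sequence lengths unseen in training, since every pair of positions maps to one of a fixed set of offset embeddings.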
There has been much recent work on training neural attention models at the sequence level, using either reinforcement-learning-style methods or beam optimization.
We propose an unsupervised keyphrase extraction model that encodes topical information within a multipartite graph structure.
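The multipartite-graph idea can be sketched as follows: candidate keyphrases are nodes, edges connect only candidates belonging to different topics, and a PageRank-style walk scores the nodes. The sketch below takes an arbitrary weight matrix as input (in the actual model, weights come from positional distance between candidate occurrences); all names are illustrative.

```python
import numpy as np

def multipartite_rank(W, topic_of, d=0.85, iters=100):
    """Score candidates on a multipartite graph via power iteration.

    W is a nonnegative weight matrix over candidates; edges within the
    same topic are removed, enforcing the multipartite structure
    (a minimal sketch of the graph-ranking step, not the full model).
    """
    n = len(topic_of)
    A = np.array(W, dtype=float)
    for i in range(n):
        for j in range(n):
            if topic_of[i] == topic_of[j]:
                A[i, j] = 0.0  # no edges inside a topic (includes i == j)
    col = A.sum(axis=0)
    # column-normalize to transition probabilities; dangling -> uniform
    P = np.divide(A, col, out=np.full_like(A, 1.0 / n), where=col > 0)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (P @ r)  # damped PageRank update
    return r

weights = np.ones((4, 4))        # illustrative uniform weights
topics = [0, 0, 1, 1]            # two topics, two candidates each
scores = multipartite_rank(weights, topics)
```

Because same-topic edges are deleted, candidates from one topic cannot reinforce each other directly, which is what pushes the ranker to cover distinct topics rather than one dominant cluster.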
In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline.
Sentence pair modeling is critical for many NLP tasks, such as paraphrase identification, semantic textual similarity, and natural language inference.
In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning objective.
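A REINFORCE-style update for this kind of objective can be sketched briefly: sentences get scores, a binary summary is sampled from the induced probabilities, and the log-likelihood gradient of the sample is scaled by (reward - baseline), where the reward would be the ROUGE of the sampled summary. The function below is a hedged illustration of that policy-gradient step, not the paper's training algorithm.

```python
import numpy as np

def reinforce_update(scores, sampled_labels, reward, baseline, lr=0.1):
    """One REINFORCE-style step for sentence ranking (illustrative).

    scores: per-sentence logits; sampled_labels: 0/1 inclusion decisions
    sampled from sigmoid(scores); reward: e.g. ROUGE of that summary.
    """
    probs = 1.0 / (1.0 + np.exp(-scores))   # per-sentence selection prob
    grad_logp = sampled_labels - probs       # d/d(scores) log p(sample)
    # scale by the centered reward and take a gradient-ascent step
    return scores + lr * (reward - baseline) * grad_logp

scores = np.zeros(4)
labels = np.array([1.0, 0.0, 1.0, 0.0])     # a sampled extract
new_scores = reinforce_update(scores, labels, reward=0.6, baseline=0.4)
```

With a positive centered reward, the sampled-in sentences have their scores pushed up and the sampled-out ones pushed down; a baseline keeps the gradient estimate from blowing up in variance.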
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding.
This framework is independent of the concrete form of generator and discriminator, and therefore can utilize a wide variety of knowledge graph embedding models as its building blocks.
We present a simple extension of the GloVe representation learning model that begins with general-purpose representations and updates them based on data from a specialized domain.
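The initialize-then-update recipe can be sketched with the GloVe objective itself: start from general-purpose vectors and run SGD on log co-occurrence counts gathered from the specialized domain. This is a hedged sketch of that idea (Mittens-style variants additionally penalize drift from the pretrained vectors); all names and hyperparameters are illustrative.

```python
import numpy as np

def glove_domain_step(W, b, cooc, lr=0.05, x_max=100, alpha=0.75):
    """One SGD pass of the GloVe loss over domain co-occurrence counts.

    W: word vectors (initialized from pretrained embeddings), b: biases,
    cooc: {(i, j): count}. A minimal sketch, not the paper's exact update.
    """
    for (i, j), x in cooc.items():
        w = min((x / x_max) ** alpha, 1.0)             # GloVe weighting f(x)
        err = W[i] @ W[j] + b[i] + b[j] - np.log(x)    # fit log count
        g = 2 * w * err
        # simultaneous update of both vectors (RHS evaluated first)
        W[i], W[j] = W[i] - lr * g * W[j], W[j] - lr * g * W[i]
        b[i] -= lr * g
        b[j] -= lr * g
    return W, b

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 5))   # stand-in for pretrained vectors
b = np.zeros(3)
cooc = {(0, 1): 20.0, (0, 2): 5.0}       # toy domain co-occurrence counts
loss = lambda: sum((W[i] @ W[j] + b[i] + b[j] - np.log(x)) ** 2
                   for (i, j), x in cooc.items())
before = loss()
for _ in range(200):
    W, b = glove_domain_step(W, b, cooc)
after = loss()
```

Because training starts from the pretrained vectors rather than random initialization, words rare in the domain corpus keep sensible general-purpose representations while frequent domain terms get pulled toward the domain statistics.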