In this work, we introduce a model and beam-search training scheme, based on the work of Daumé III and Marcu (2005), that extends seq2seq to learn global sequence scores.
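A minimal sketch of the core idea of scoring whole sequences rather than individual tokens: train with a margin loss that pushes the gold sequence's global score above the best competing beam hypothesis. This is a simplified, hypothetical illustration (the helper name `sequence_margin_loss` and the scalar-score setup are assumptions), not the authors' exact beam-search optimization objective.

```python
import torch
import torch.nn.functional as F

def sequence_margin_loss(gold_score: torch.Tensor,
                         beam_scores: torch.Tensor,
                         margin: float = 1.0) -> torch.Tensor:
    """Hinge loss: the gold sequence's global score should exceed the
    highest-scoring beam hypothesis by at least `margin`."""
    best_competitor = beam_scores.max()
    return F.relu(margin - gold_score + best_competitor)

# Toy usage: scalar global scores, e.g. produced by summing per-step model scores.
gold = torch.tensor(2.0, requires_grad=True)      # score of the reference sequence
beam = torch.tensor([1.5, 2.3, 0.7])              # scores of beam candidates
loss = sequence_margin_loss(gold, beam)
loss.backward()                                    # gradient flows into the gold score
```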
This paper explores the task of translating natural language queries into regular expressions which embody their meaning.
Coreference resolution systems are typically trained with heuristic loss functions that require careful tuning.
Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations.
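A small sketch of these three pooling operations, assuming the visual and textual encoders have already produced fixed-size feature vectors; the names `v`, `q`, and the dimension `d` are placeholders, not identifiers from the paper.

```python
import torch

d = 512
v = torch.randn(d)   # pooled visual representation (placeholder)
q = torch.randn(d)   # pooled textual representation (placeholder)

pooled_product = v * q                      # element-wise product, d-dimensional
pooled_sum = v + q                          # element-wise sum, d-dimensional
pooled_concat = torch.cat([v, q], dim=-1)   # concatenation, 2*d-dimensional
```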
Ensembling and unknown word replacement add another 2 BLEU points, bringing NMT performance on low-resource machine translation close to that of a strong syntax-based machine translation (SBMT) system, and exceeding it on one language pair.
A word's sentiment depends on the domain in which it is used.