We consider the problem of Recognizing Textual Entailment within an Information Retrieval context, where we must simultaneously determine both the relevance and the degree of entailment of individual pieces of evidence in order to produce a yes/no answer to a binary natural language question.
Recently, Le and Mikolov (2014) proposed doc2vec as an extension to word2vec (Mikolov et al., 2013a) to learn document-level embeddings.
This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge.
Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information.
We participated in the WMT 2016 shared news translation task by building neural translation systems for four language pairs, each trained in both directions: English<->Czech, English<->German, English<->Romanian and English<->Russian.
Causal precedence between biochemical interactions is crucial in the biomedical domain, because it transforms collections of individual interactions, e.g., bindings and phosphorylations, into the causal mechanisms needed to inform meaningful search and inference.
Current approaches to learning vector representations of text that are compatible across different languages usually require some amount of parallel text, aligned at the word, sentence, or at least document level.
In this paper, we report a knowledge-based method for Word Sense Disambiguation in the domains of biomedical and clinical text.