WS 2016

Joint Learning of Sentence Embeddings for Relevance and Entailment

WS 2016 brmson/dataset-sts

We consider the problem of Recognizing Textual Entailment within an Information Retrieval context, where we must simultaneously determine the relevance and the degree of entailment of individual pieces of evidence in order to produce a yes/no answer to a binary natural language question.

DECISION MAKING INFORMATION RETRIEVAL NATURAL LANGUAGE INFERENCE READING COMPREHENSION SENTENCE EMBEDDINGS

An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation

WS 2016 jhlau/doc2vec

Recently, Le and Mikolov (2014) proposed doc2vec as an extension to word2vec (Mikolov et al., 2013a) to learn document-level embeddings.

DOCUMENT EMBEDDING WORD EMBEDDINGS

Does Multimodality Help Human and Machine for Translation and Image Captioning?

WS 2016 lium-lst/nmtpy

This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge.

IMAGE CAPTIONING MULTIMODAL MACHINE TRANSLATION

Linguistic Input Features Improve Neural Machine Translation

WS 2016 rsennrich/wmt16-scripts

Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information.

MACHINE TRANSLATION

Edinburgh Neural Machine Translation Systems for WMT 16

WS 2016 rsennrich/wmt16-scripts

We participated in the WMT 2016 shared news translation task by building neural translation systems for four language pairs, each trained in both directions: English<->Czech, English<->German, English<->Romanian and English<->Russian.

MACHINE TRANSLATION

This before That: Causal Precedence in the Biomedical Domain

WS 2016 clulab/reach

Causal precedence between biochemical interactions is crucial in the biomedical domain because it transforms collections of individual interactions, e.g., bindings and phosphorylations, into the causal mechanisms needed to inform meaningful search and inference.

Towards cross-lingual distributed representations without parallel text trained with adversarial autoencoders

WS 2016 Avmb/clweadv

Current approaches to learning vector representations of text that are compatible across languages usually require some amount of parallel text, aligned at the word, sentence, or at least document level.

Using Distributed Representations to Disambiguate Biomedical and Clinical Concepts

WS 2016 clips/yarn

In this paper, we report a knowledge-based method for Word Sense Disambiguation in the domains of biomedical and clinical text.

WORD SENSE DISAMBIGUATION