Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks.
#20 best model for Named Entity Recognition on CoNLL 2003 (English)
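To make "standard component" concrete, here is a minimal PyTorch sketch of initializing an embedding layer from pre-computed word vectors; the toy vocabulary and vector values are placeholders, not from the paper.

```python
import torch
import torch.nn as nn

# Toy pre-trained matrix; in practice each row would be the GloVe or
# word2vec vector for one word in the model's vocabulary.
pretrained = torch.tensor([
    [0.1, 0.2, 0.3],   # "the"
    [0.4, 0.5, 0.6],   # "cat"
    [0.0, 0.0, 0.0],   # "<unk>"
])

# Build the embedding layer from the pre-trained matrix.
# freeze=True keeps the vectors fixed; freeze=False would fine-tune them.
embedding = nn.Embedding.from_pretrained(pretrained, freeze=True)

token_ids = torch.tensor([0, 1, 2])   # "the cat <unk>"
print(embedding(token_ids).shape)     # torch.Size([3, 3])
```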
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.
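The span framing implies a reader that scores candidate start and end positions in a passage. A hedged sketch of that scoring step, assuming question-aware token encodings are already computed; the two linear scorers and all dimensions are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

seq_len, hidden = 50, 128
token_reprs = torch.randn(seq_len, hidden)  # stand-in passage encodings

# Separate scorers for where the answer span starts and ends.
start_scorer = nn.Linear(hidden, 1)
end_scorer = nn.Linear(hidden, 1)
start_scores = start_scorer(token_reprs).squeeze(-1)  # (seq_len,)
end_scores = end_scorer(token_reprs).squeeze(-1)      # (seq_len,)

# Score every (start, end) pair and keep only valid spans (start <= end).
pair_scores = start_scores.unsqueeze(1) + end_scores.unsqueeze(0)
valid = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool))
pair_scores = pair_scores.masked_fill(~valid, float("-inf"))

best = pair_scores.argmax().item()
start, end = divmod(best, seq_len)
print(f"predicted answer span: tokens {start}..{end}")
```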
End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors.
The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence.
#11 best model for Machine Translation on WMT2016 English-Romanian
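For readers unfamiliar with the encoder being replaced, a minimal sketch of a bi-directional LSTM encoding a source sentence in PyTorch; vocabulary size and dimensions are placeholders.

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden = 1000, 64, 128

embedding = nn.Embedding(vocab_size, emb_dim)
# bidirectional=True runs one LSTM left-to-right and one right-to-left,
# concatenating their states so each position sees the whole sentence.
encoder = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)

src = torch.randint(0, vocab_size, (1, 12))  # one 12-token source sentence
states, _ = encoder(embedding(src))
print(states.shape)  # torch.Size([1, 12, 256]): per-token encoder states
```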
We describe an open-source toolkit for neural machine translation (NMT).
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).
#4 best model for Abstractive Text Summarization on CNN / Daily Mail
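What makes the approach abstractive is that the decoder emits any vocabulary word at each step rather than copying source passages. A toy greedy-decoding loop showing that, with an untrained stand-in decoder; none of this is the paper's model.

```python
import torch
import torch.nn as nn

vocab_size, hidden = 1000, 128
BOS, EOS = 1, 2

embed = nn.Embedding(vocab_size, hidden)
decoder_cell = nn.LSTMCell(hidden, hidden)
project = nn.Linear(hidden, vocab_size)

# Greedy decoding: each step is free to pick any word in the vocabulary,
# so the summary is generated, not extracted verbatim from the source.
h = c = torch.zeros(1, hidden)   # would come from the document encoder
token = torch.tensor([BOS])
summary = []
for _ in range(20):
    h, c = decoder_cell(embed(token), (h, c))
    token = project(h).argmax(dim=-1)
    if token.item() == EOS:
        break
    summary.append(token.item())
print(summary)  # token ids of the generated summary
```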
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult when it requires executing efficient discrete operations against a large knowledge base.
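To illustrate what a discrete operation against a knowledge base looks like, a toy sketch: the KB is a set of (subject, relation, object) triples and a hop follows a relation from a set of entities. The tiny KB and the operation name are invented for the example.

```python
# Toy knowledge base of (subject, relation, object) triples.
KB = {
    ("Honolulu", "city_in", "Hawaii"),
    ("Hilo", "city_in", "Hawaii"),
    ("Hawaii", "state_of", "USA"),
}

def hop(entities, relation):
    """Follow `relation` from every entity in `entities`.

    Exact and cheap even on a large KB, but non-differentiable,
    which is exactly what makes pairing it with neural nets hard.
    """
    return {o for s, r, o in KB if s in entities and r == relation}

# "Which country is Honolulu in?" -> two hops.
state = hop({"Honolulu"}, "city_in")   # {'Hawaii'}
print(hop(state, "state_of"))          # {'USA'}
```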
We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations.
#2 best model for Predicate Detection on CoNLL 2005
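For context on the task itself, a toy example of the predicate-argument structure an SRL model predicts, in the usual BIO-over-PropBank-roles format; this shows the output format only, not the paper's model.

```python
# SRL labels argument spans relative to one predicate ("gave").
tokens = ["The", "teacher", "gave", "the", "students", "homework"]
tags   = ["B-ARG0", "I-ARG0", "B-V", "B-ARG2", "I-ARG2", "B-ARG1"]

for tok, tag in zip(tokens, tags):
    print(f"{tok}\t{tag}")  # ARG0 = giver, ARG2 = recipient, ARG1 = thing given
```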