Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks.
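A minimal sketch of how pre-trained vectors are typically plugged into such an architecture, using PyTorch; the tiny `pretrained` dict below stands in for vectors loaded from a GloVe- or word2vec-style file, and all names and sizes are invented for illustration:

```python
import torch
import torch.nn as nn

# Toy stand-in for vectors loaded from a GloVe/word2vec-style file.
pretrained = {"the": [0.1, 0.2, 0.3], "cat": [0.4, 0.5, 0.6]}
vocab = {"<pad>": 0, "<unk>": 1, "the": 2, "cat": 3}
dim = 3

# Random init for words without a pre-trained vector, then copy in the rest.
weights = torch.randn(len(vocab), dim) * 0.1
for word, idx in vocab.items():
    if word in pretrained:
        weights[idx] = torch.tensor(pretrained[word])

# freeze=False lets the task fine-tune the embeddings during training.
embedding = nn.Embedding.from_pretrained(weights, freeze=False, padding_idx=0)
ids = torch.tensor([[2, 3, 0]])   # "the cat <pad>"
print(embedding(ids).shape)       # torch.Size([1, 3, 3])
```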
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.
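The span-extraction step this describes can be sketched as scoring every (start, end) pair in a passage and taking the best valid one; the random logits below are stand-ins for a real reader model's per-token scores, and the sizes are hypothetical:

```python
import torch

n = 50                                # passage length
start_logits = torch.randn(1, n)      # stand-in per-token start scores
end_logits = torch.randn(1, n)        # stand-in per-token end scores

# Score every span (i, j) as start[i] + end[j]; keep j >= i and cap the length.
max_len = 15
scores = start_logits.unsqueeze(2) + end_logits.unsqueeze(1)        # (1, n, n)
valid = torch.triu(torch.ones(n, n, dtype=torch.bool)) \
        & ~torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=max_len)
scores = scores.masked_fill(~valid, float("-inf"))

best = scores.view(1, -1).argmax(dim=1)
start, end = best // n, best % n
print(start.item(), end.item())       # indices of the predicted answer span
```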
End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors.
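As a rough illustration of the end-to-end setup (not the paper's actual architecture), a recurrent network can map the featurized dialog history directly to a distribution over system actions; every dimension and name here is invented:

```python
import torch
import torch.nn as nn

class RNNDialogPolicy(nn.Module):
    """Toy end-to-end dialog policy: dialog history in, next system action out."""
    def __init__(self, feat_dim=32, hidden=64, n_actions=10):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_actions)

    def forward(self, turns):
        # turns: (batch, n_turns, feat_dim) featurized utterances so far
        h, _ = self.rnn(turns)
        return self.out(h[:, -1])   # logits over system actions at the last turn

policy = RNNDialogPolicy()
logits = policy(torch.randn(4, 5, 32))
print(logits.shape)                 # torch.Size([4, 10])
```

Trained with supervised cross-entropy on example dialogs, a model of this shape learns behaviors only from data, which is exactly why the sentence above flags data efficiency as the bottleneck.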
The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence.
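A minimal PyTorch sketch of the bi-directional LSTM source encoder that this prevalent approach refers to, with hypothetical vocabulary and hidden sizes:

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    """Sketch of a bidirectional-LSTM source encoder for NMT."""
    def __init__(self, vocab_size, emb_dim=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)

    def forward(self, src):
        # Each source position gets a concatenation of the forward and
        # backward hidden states, which the decoder later attends over.
        outputs, _ = self.lstm(self.embed(src))
        return outputs              # (batch, src_len, 2 * hidden)

enc = BiLSTMEncoder(vocab_size=10000)
print(enc(torch.randint(0, 10000, (2, 7))).shape)   # torch.Size([2, 7, 1024])
```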
We describe an open-source toolkit for neural machine translation (NMT).
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).
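One common refinement of such models is a pointer/copy mechanism that mixes generating a word from the vocabulary with copying a source token via the attention weights, so the model can both abstract and quote. A sketch of that mixture, where every tensor is a random stand-in for real model outputs:

```python
import torch

vocab_size, src_len = 50, 8
vocab_dist = torch.softmax(torch.randn(1, vocab_size), dim=-1)  # P_vocab
attention = torch.softmax(torch.randn(1, src_len), dim=-1)      # copy weights
p_gen = torch.sigmoid(torch.randn(1, 1))                        # P(generate)
src_ids = torch.randint(0, vocab_size, (1, src_len))            # source ids

# Final distribution: generate with prob p_gen, copy with prob (1 - p_gen).
final_dist = p_gen * vocab_dist
final_dist = final_dist.scatter_add(1, src_ids, (1 - p_gen) * attention)
print(final_dist.sum())   # ~1.0: still a valid distribution over the vocab
```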
We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations.
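SRL in this family of models is usually cast as BIO tagging over the sentence given a marked predicate; a minimal sketch of that setup (not the paper's exact architecture, which is deeper and adds highway connections and constrained decoding), with invented sizes:

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Minimal SRL sketch: tokens plus a predicate flag in, BIO tag logits out."""
    def __init__(self, vocab_size, n_tags, emb_dim=100, hidden=300, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # The +1 input feature marks whether each token is the target predicate.
        self.lstm = nn.LSTM(emb_dim + 1, hidden, num_layers=layers,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, tokens, pred_flag):
        x = torch.cat([self.embed(tokens),
                       pred_flag.unsqueeze(-1).float()], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)          # (batch, seq_len, n_tags) BIO logits

tagger = BiLSTMTagger(vocab_size=5000, n_tags=9)
tokens = torch.randint(0, 5000, (1, 6))
flag = torch.zeros(1, 6, dtype=torch.long)
flag[0, 2] = 1                      # the third token is the predicate
print(tagger(tokens, flag).shape)   # torch.Size([1, 6, 9])
```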
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult when it requires executing efficient discrete operations against a large knowledge base.
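To make "discrete operations against a knowledge base" concrete, here is a toy executor over (subject, relation, object) triples; the KB contents and the operation are invented for illustration, and the discreteness is exactly what makes such steps hard to differentiate through:

```python
# A knowledge base as a set of (subject, relation, object) triples.
KB = {
    ("usa", "capital", "washington_dc"),
    ("france", "capital", "paris"),
    ("paris", "population", "2.1M"),
}

def hop(entities, relation, kb=KB):
    """Discrete operation: follow `relation` edges out of an entity set."""
    return {o for (s, r, o) in kb if s in entities and r == relation}

# "What is the population of the capital of France?" as a two-step program.
answer = hop(hop({"france"}, "capital"), "population")
print(answer)   # {'2.1M'}
```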