Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks.
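As a concrete illustration of how such embeddings are plugged into a model, here is a minimal PyTorch sketch (not taken from any of the papers below); the file path, vocabulary, and dimensionality are hypothetical, and GloVe-style whitespace-separated vectors are assumed.

```python
# Sketch only: initialize an embedding layer from pre-trained vectors.
# Path, vocabulary, and dimension are hypothetical.
import torch
import torch.nn as nn

def load_pretrained_embeddings(path, vocab, dim=100):
    """Build an embedding matrix; words missing from the file stay random."""
    matrix = torch.randn(len(vocab), dim) * 0.1
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, vec = parts[0], parts[1:]
            if word in vocab and len(vec) == dim:
                matrix[vocab[word]] = torch.tensor([float(x) for x in vec])
    # freeze=False lets the vectors be fine-tuned with the rest of the model
    return nn.Embedding.from_pretrained(matrix, freeze=False)

vocab = {"<unk>": 0, "the": 1, "cat": 2}                       # hypothetical
emb = load_pretrained_embeddings("glove.6B.100d.txt", vocab)   # hypothetical path
```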
This paper proposes to tackle open-domain question answering using Wikipedia as the sole knowledge source: the answer to any factoid question is a text span in a Wikipedia article.
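To make the span-extraction setup concrete, below is a minimal PyTorch sketch of independent start/end scoring over token encodings. It illustrates the general idea only, not the paper's actual reader; the hidden size and the `SpanScorer` name are invented.

```python
# Sketch only: score every (start, end) answer span in a passage by
# summing independent start and end logits over token encodings.
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.start = nn.Linear(hidden, 1)
        self.end = nn.Linear(hidden, 1)

    def forward(self, token_states):                 # (seq_len, hidden)
        s = self.start(token_states).squeeze(-1)     # start logits, (seq_len,)
        e = self.end(token_states).squeeze(-1)       # end logits, (seq_len,)
        scores = s.unsqueeze(1) + e.unsqueeze(0)     # scores[i, j] = s[i] + e[j]
        # Forbid spans with start > end by masking the lower triangle.
        mask = torch.tril(torch.ones_like(scores), diagonal=-1).bool()
        scores = scores.masked_fill(mask, float("-inf"))
        best = scores.argmax().item()
        return divmod(best, scores.size(1))          # (start_idx, end_idx)

start, end = SpanScorer()(torch.randn(40, 128))      # dummy token encodings
```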
End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors.
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).
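The following is a bare-bones sketch of what such a sequence-to-sequence generator looks like in PyTorch: a generic greedy encoder-decoder with hypothetical dimensions, not the model proposed in the paper, and with attention omitted entirely. Because the decoder emits tokens from the whole vocabulary, the output is not limited to spans copied from the source.

```python
# Sketch only: a minimal encoder-decoder that generates a summary
# token by token. Dimensions and token ids are hypothetical.
import torch
import torch.nn as nn

class Seq2SeqSummarizer(nn.Module):
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRUCell(dim, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, source, max_len=30, bos_id=1):
        _, h = self.encoder(self.emb(source))        # h: (1, batch, dim)
        h = h.squeeze(0)
        token = torch.full((source.size(0),), bos_id, dtype=torch.long)
        outputs = []
        for _ in range(max_len):                     # greedy decoding
            h = self.decoder(self.emb(token), h)
            token = self.out(h).argmax(-1)           # most likely next word
            outputs.append(token)
        return torch.stack(outputs, dim=1)           # (batch, max_len)
```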
We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset.
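A hedged sketch of what such a secondary objective can look like in PyTorch is given below: a tagger whose loss adds a next-word prediction term to the labeling loss. The paper's framework predicts surrounding words in both directions; this sketch keeps only the forward direction for brevity, and the weighting factor `gamma` is an assumed hyperparameter, not a value from the paper.

```python
# Sketch only: sequence labeling with an auxiliary objective of
# predicting each word's successor. Dimensions and gamma are assumptions.
import torch
import torch.nn as nn

class MultitaskTagger(nn.Module):
    def __init__(self, vocab_size, n_tags, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.tag_head = nn.Linear(dim, n_tags)        # main labeling task
        self.next_word = nn.Linear(dim, vocab_size)   # auxiliary LM task

    def forward(self, tokens, tags, gamma=0.1):
        h, _ = self.rnn(self.emb(tokens))             # (batch, seq, dim)
        tag_loss = nn.functional.cross_entropy(
            self.tag_head(h).flatten(0, 1), tags.flatten())
        # Auxiliary loss: predict the following word from each position.
        lm_logits = self.next_word(h[:, :-1]).flatten(0, 1)
        lm_loss = nn.functional.cross_entropy(lm_logits, tokens[:, 1:].flatten())
        return tag_loss + gamma * lm_loss
```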
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses.
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult when it requires executing efficient discrete operations against a large knowledge base.