End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors.
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).
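To make "abstractive" concrete, here is a minimal encoder-decoder sketch, not the model from this or any particular paper; names and sizes are illustrative. Because the decoder emits tokens from the full vocabulary at each step, the summary need not be a copied span of the source:

```python
import torch.nn as nn

class Seq2SeqSummarizer(nn.Module):
    """Minimal encoder-decoder sketch (illustrative, not a reference model)."""
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, src, tgt_in):
        _, state = self.encoder(self.embed(src))      # compress the article
        dec, _ = self.decoder(self.embed(tgt_in), state)
        return self.out(dec)                          # logits over the whole vocab,
                                                      # so output is generated, not selected
```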
We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset.
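A minimal sketch of this kind of multi-task setup, assuming a BiLSTM tagger in PyTorch; the hyperparameters, names, and loss weighting here are illustrative, not the paper's reference implementation. The forward LSTM state at each position predicts the next word, the backward state predicts the previous word, and both auxiliary losses are added to the primary tagging loss:

```python
import torch.nn as nn

class MTLTagger(nn.Module):
    """BiLSTM tagger with an auxiliary language-modeling objective (illustrative sketch)."""
    def __init__(self, vocab_size, num_tags, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.tag_head = nn.Linear(2 * hidden_dim, num_tags)
        self.fwd_lm_head = nn.Linear(hidden_dim, vocab_size)  # predicts the next word
        self.bwd_lm_head = nn.Linear(hidden_dim, vocab_size)  # predicts the previous word

    def forward(self, tokens, tags, gamma=0.1):
        h, _ = self.bilstm(self.embed(tokens))        # (B, T, 2H)
        h_fwd, h_bwd = h.chunk(2, dim=-1)             # split the two directions
        ce = nn.CrossEntropyLoss()
        tag_loss = ce(self.tag_head(h).transpose(1, 2), tags)
        # auxiliary objective: every position also predicts its neighbours
        lm_fwd = ce(self.fwd_lm_head(h_fwd[:, :-1]).transpose(1, 2), tokens[:, 1:])
        lm_bwd = ce(self.bwd_lm_head(h_bwd[:, 1:]).transpose(1, 2), tokens[:, :-1])
        return tag_loss + gamma * (lm_fwd + lm_bwd)   # gamma weights the secondary objective
```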
In this paper, we address this limitation (symbolic KB queries are non-differentiable and so block end-to-end training) by replacing symbolic queries with an induced "soft" posterior distribution over the KB that indicates which entities the user is interested in.
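One way such a soft lookup can stay differentiable, sketched below under simplifying assumptions (independent per-slot beliefs, a complete KB, illustrative names; not the paper's exact formulation): score each KB row by the belief mass its attribute values receive, then normalize into a posterior over entities:

```python
import torch

def soft_kb_posterior(kb, slot_value_probs):
    """Differentiable "soft" KB lookup sketch (illustrative, simplified).

    kb: list of dicts mapping slot name -> value index for each entity
    slot_value_probs: dict mapping slot name -> 1-D tensor of value probabilities
    """
    scores = torch.ones(len(kb))
    for slot, probs in slot_value_probs.items():
        values = torch.tensor([row[slot] for row in kb])
        scores = scores * probs[values]   # gathers belief mass; differentiable w.r.t. probs
    return scores / scores.sum()          # posterior over KB entities

# Example: 3 movies, slots "genre" (2 values) and "year" (3 values)
kb = [{"genre": 0, "year": 1}, {"genre": 1, "year": 1}, {"genre": 0, "year": 2}]
beliefs = {"genre": torch.tensor([0.7, 0.3]),
           "year": torch.tensor([0.2, 0.5, 0.3])}
print(soft_kb_posterior(kb, beliefs))  # ≈ tensor([0.4930, 0.2113, 0.2958])
```

Rows whose attribute values carry high belief mass dominate the posterior, so the downstream policy can attend to likely entities without executing a hard, non-differentiable query.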
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples.