Analogical reasoning is effective at capturing linguistic regularities: relational similarities between words surface as vector offsets in embedding space, so that, for example, king − man + woman lands near queen.
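A minimal sketch of that idea: the script below solves a : b :: c : ? analogies by vector offset and cosine similarity. The four 3-dimensional vectors are made up for illustration; real experiments use pretrained embeddings such as word2vec or GloVe.

```python
import numpy as np

# Toy vectors for illustration only; real analyses use pretrained embeddings.
embeddings = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Solve a : b :: c : ? as argmax over cos(vec(b) - vec(a) + vec(c), vec(w))."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    candidates = {w: v for w, v in embeddings.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("man", "king", "woman"))  # -> "queen" with these toy vectors
```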
We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon.
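A sketch of the lexicon-matching step this relies on, assuming a small placeholder lexicon: the snippet enumerates every contiguous character span that matches a lexicon word, i.e. the word candidates that become extra paths in the lattice alongside the character sequence.

```python
# The sentence is the classic segmentation-ambiguity example
# ("Nanjing Yangtze River Bridge"); the lexicon is a small placeholder.
lexicon = {"南京", "南京市", "市长", "长江", "大桥", "长江大桥"}
sentence = "南京市长江大桥"
max_len = max(len(w) for w in lexicon)

# Every contiguous character span that matches a lexicon word becomes
# a word candidate, i.e. an extra edge in the lattice.
matches = [
    (i, j, sentence[i:j])
    for i in range(len(sentence))
    for j in range(i + 1, min(i + max_len, len(sentence)) + 1)
    if sentence[i:j] in lexicon
]

for start, end, word in matches:
    print(f"chars[{start}:{end}] -> {word}")
```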
Inspired by how humans summarize long documents, we propose an accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively (i.e., compresses and paraphrases) to generate a concise overall summary.
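A rough sketch of that two-stage pipeline shape, with crude heuristics standing in for the learned components: the frequency-based salience score and the truncation "rewriter" below are illustrative assumptions only, not the paper's extractor or seq2seq rewriter.

```python
import re
from collections import Counter

def split_sentences(text):
    # Naive sentence splitter; real systems use a trained tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def select_salient(sentences, k=2):
    # Heuristic stand-in for the learned extractor: score each sentence
    # by how many of the document's frequent words it contains.
    doc_counts = Counter(w.lower() for s in sentences for w in re.findall(r"\w+", s))
    def score(s):
        return sum(doc_counts[w.lower()] for w in re.findall(r"\w+", s))
    return sorted(sentences, key=score, reverse=True)[:k]

def rewrite(sentence, max_words=12):
    # Placeholder for the abstractive rewriter (a seq2seq model in the
    # paper); truncation merely stands in for compression/paraphrase.
    words = sentence.split()
    return " ".join(words[:max_words]) + ("…" if len(words) > max_words else "")

def summarize(text, k=2):
    # Stage 1: select salient sentences; stage 2: rewrite each one.
    return " ".join(rewrite(s) for s in select_salient(split_sentences(text), k))
```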
Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training.
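A simplified PyTorch sketch of such an adversarial setup: a linear map is trained to make mapped source-language embeddings indistinguishable from target-language embeddings, while a discriminator is trained to tell them apart. The dimensions, optimizers, and random batches are placeholders, and refinements used in practice (e.g., orthogonality constraints and nearest-neighbor refinement) are omitted.

```python
import torch
import torch.nn as nn

d = 300                                  # embedding dimensionality (illustrative)
W = nn.Linear(d, d, bias=False)          # the cross-lingual mapping to learn
disc = nn.Sequential(                    # discriminator: mapped source vs. target
    nn.Linear(d, 512), nn.LeakyReLU(0.2), nn.Linear(512, 1))
bce = nn.BCEWithLogitsLoss()
opt_w = torch.optim.SGD(W.parameters(), lr=0.1)
opt_d = torch.optim.SGD(disc.parameters(), lr=0.1)

def train_step(src_batch, tgt_batch):
    # 1) Train the discriminator to label mapped source embeddings 0
    #    and target embeddings 1.
    opt_d.zero_grad()
    mapped = W(src_batch).detach()
    d_loss = bce(disc(mapped), torch.zeros(len(mapped), 1)) + \
             bce(disc(tgt_batch), torch.ones(len(tgt_batch), 1))
    d_loss.backward()
    opt_d.step()
    # 2) Train the mapping to fool the discriminator.
    opt_w.zero_grad()
    w_loss = bce(disc(W(src_batch)), torch.ones(len(src_batch), 1))
    w_loss.backward()
    opt_w.step()
    return d_loss.item(), w_loss.item()

# Usage with random stand-ins for monolingual embedding batches:
src, tgt = torch.randn(32, d), torch.randn(32, d)
train_step(src, tgt)
```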
We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input.
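One natural adaptation step is to rank a document's paragraphs against the question and feed only the best few to the paragraph-level model. The sketch below scores paragraphs with a simple TF-IDF-weighted overlap; the exact scoring details here are an illustrative assumption, not a fixed recipe from the paper.

```python
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"\w+", text.lower())

def select_paragraphs(document, question, k=4):
    # Rank paragraphs by TF-IDF-weighted overlap with the question,
    # then pass only the top-k to the paragraph-level QA model.
    paragraphs = [p for p in document.split("\n\n") if p.strip()]
    n = len(paragraphs)
    df = Counter()
    for p in paragraphs:
        df.update(set(tokens(p)))
    def score(p):
        counts = Counter(tokens(p))
        return sum(counts[w] * math.log(n / (1 + df[w])) for w in set(tokens(question)))
    return sorted(paragraphs, key=score, reverse=True)[:k]
```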
We demonstrate that replacing an LSTM encoder with a self-attentive architecture can lead to improvements to a state-of-the-art discriminative constituency parser.
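A minimal PyTorch sketch of such a swap, with illustrative sizes: both encoders map a batch of embedded tokens to per-token vectors of the same width, so the self-attentive encoder can stand in for the LSTM behind the rest of the parser. Positional information, which self-attention otherwise lacks, is omitted here for brevity.

```python
import torch
import torch.nn as nn

vocab, d_model = 10_000, 256     # illustrative sizes
embed = nn.Embedding(vocab, d_model)

# Baseline encoder being replaced: a bidirectional LSTM.
lstm = nn.LSTM(d_model, d_model // 2, bidirectional=True, batch_first=True)

# Drop-in replacement: a stack of self-attention layers.
layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
attn = nn.TransformerEncoder(layer, num_layers=8)

token_ids = torch.randint(0, vocab, (2, 15))   # (batch, sentence length)
x = embed(token_ids)
lstm_out, _ = lstm(x)    # (2, 15, d_model)
attn_out = attn(x)       # (2, 15, d_model): same shape, same downstream interface
```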