Recent advances in language modeling using recurrent neural networks have made it viable to model language as distributions over characters.
We investigate the design challenges of constructing effective and efficient neural sequence labeling systems by reproducing twelve neural sequence labeling models, which include most of the state-of-the-art structures, and conducting a systematic model comparison on three benchmarks (i.e., NER, chunking, and POS tagging).
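All three benchmarks (NER, chunking, POS tagging) are typically scored at the span level, which requires decoding BIO tag sequences into labeled spans. The helper below is a minimal illustrative sketch of that standard decoding step, not code from the compared systems; treating a mismatched `I-` tag as closing the current span is one common convention among several.

```python
def bio_to_spans(tags):
    """Convert a BIO tag sequence into (label, start, end) spans.

    `end` is exclusive. This is the decoding step used when scoring
    chunking/NER output with span-level F1. An `I-` tag whose label
    does not continue the open span simply closes it here (one of
    several conventions used by evaluation scripts).
    """
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue  # span continues
        else:
            if start is not None:
                spans.append((label, start, i))
            start, label = None, None
    if start is not None:
        spans.append((label, start, len(tags)))
    return spans
```

For example, `bio_to_spans(["B-NP", "I-NP", "O", "B-VP"])` yields the two chunks `("NP", 0, 2)` and `("VP", 3, 4)`.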
In this paper, we describe TextEnt, a neural network model that learns distributed representations of entities and documents directly from a knowledge base (KB).
Censorship of Internet content in China is understood to operate through a system of intermediary liability whereby service providers are liable for the content on their platforms.
In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks.
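A common baseline structure shared by many sentence pair models is a siamese setup: both sentences pass through the same encoder and the pair is scored by a similarity function. The sketch below is a hypothetical mean-pooled bag-of-embeddings variant for illustration only; it is not one of the specific architectures compared in the paper, and `vocab` and `embeddings` are assumed inputs.

```python
import numpy as np

def encode(tokens, vocab, embeddings):
    """Mean-pool word vectors for one sentence (a simple shared encoder).

    `vocab` maps token -> row index; `embeddings` is a (V, d) matrix.
    Out-of-vocabulary tokens are skipped.
    """
    vecs = [embeddings[vocab[t]] for t in tokens if t in vocab]
    if not vecs:
        return np.zeros(embeddings.shape[1])
    return np.mean(vecs, axis=0)

def pair_score(sent1, sent2, vocab, embeddings):
    """Cosine similarity between the two sentence encodings."""
    a = encode(sent1, vocab, embeddings)
    b = encode(sent2, vocab, embeddings)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

Tasks such as paraphrase identification threshold this score, while semantic textual similarity regresses against it; trained models replace the mean-pooling encoder with recurrent, convolutional, or attention-based ones.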
Further analysis of experimental results demonstrates that the proposed methods not only capture the correlations between labels, but also select the most informative words automatically when predicting different labels.
In this paper, we aggregate previous utterances into a fine-grained context representation using a proposed deep utterance aggregation model.
A key bottleneck for Chinese named entity recognition (NER) in new domains is the lack of annotated data.