Neural Architectures for Named Entity Recognition

State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures: one based on bidirectional LSTMs and conditional random fields, and one that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus, and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art NER performance in four languages without resorting to any language-specific knowledge or resources such as gazetteers.
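To make the first architecture concrete, below is a minimal PyTorch sketch of its encoder: a character-level BiLSTM builds a representation for each word, which is concatenated with a word embedding and fed to a word-level BiLSTM whose outputs serve as per-token emission scores. This is a sketch under stated assumptions, not the authors' implementation; all class names, hyperparameters, and dimensions are illustrative, and the CRF layer and the transition-based model are omitted.

    # Minimal sketch (not the authors' code) of a BiLSTM encoder with
    # character-level word representations. All sizes are illustrative.
    import torch
    import torch.nn as nn

    class CharBiLSTMWordEncoder(nn.Module):
        """Builds a word representation from its characters; hyperparameters
        here are assumptions, not the paper's settings."""
        def __init__(self, n_chars, char_dim=25, char_hidden=25):
            super().__init__()
            self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
            self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                     bidirectional=True, batch_first=True)

        def forward(self, char_ids):  # char_ids: (n_words, max_word_len)
            embedded = self.char_emb(char_ids)
            _, (h, _) = self.char_lstm(embedded)
            # Concatenate final forward and backward hidden states per word.
            return torch.cat([h[0], h[1]], dim=-1)  # (n_words, 2 * char_hidden)

    class BiLSTMTagger(nn.Module):
        """Word-level BiLSTM producing per-token tag scores; the CRF layer
        that would consume these emission scores is omitted here."""
        def __init__(self, n_words, n_chars, n_tags, word_dim=100, hidden=100):
            super().__init__()
            # In practice, word_emb would be initialized from pretrained
            # embeddings learned on unannotated corpora.
            self.word_emb = nn.Embedding(n_words, word_dim)
            self.char_enc = CharBiLSTMWordEncoder(n_chars)
            # 50 = 2 * char_hidden from the character encoder above.
            self.lstm = nn.LSTM(word_dim + 50, hidden,
                                bidirectional=True, batch_first=True)
            self.out = nn.Linear(2 * hidden, n_tags)

        def forward(self, word_ids, char_ids):  # word_ids: (1, seq_len)
            chars = self.char_enc(char_ids).unsqueeze(0)  # (1, seq_len, 50)
            words = self.word_emb(word_ids)               # (1, seq_len, word_dim)
            feats, _ = self.lstm(torch.cat([words, chars], dim=-1))
            return self.out(feats)  # emission scores, fed to a CRF in the paper

    # Toy usage: one 4-token sentence; vocabulary and tag sizes are placeholders.
    model = BiLSTMTagger(n_words=1000, n_chars=80, n_tags=9)
    word_ids = torch.randint(0, 1000, (1, 4))
    char_ids = torch.randint(1, 80, (4, 6))  # 4 words, up to 6 characters each
    print(model(word_ids, char_ids).shape)   # torch.Size([1, 4, 9])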

Published at NAACL 2016.
Results

Task                            Dataset               Model     Metric  Value  Global Rank
Named Entity Recognition (NER)  CoNLL++               LSTM-CRF  F1      91.47  #8
Named Entity Recognition (NER)  CoNLL 2003 (English)  LSTM-CRF  F1      90.94  #67

Methods

- LSTM-CRF: a bidirectional LSTM encoder with a conditional random field output layer for joint tag decoding (a hedged Viterbi decoding sketch follows below)
- Transition-based model: constructs and labels segments, inspired by shift-reduce parsers
- Character-based word representations learned from the supervised corpus
- Pretrained word representations learned from unannotated corpora
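As a companion to the LSTM-CRF item above, here is a minimal sketch of Viterbi decoding over per-token emission scores, assuming a learned tag-transition matrix. The function name and toy inputs are illustrative assumptions, not taken from the paper's code.

    # Hedged sketch of CRF Viterbi decoding; scores here are random stand-ins
    # for the BiLSTM emissions and learned transition parameters.
    import torch

    def viterbi_decode(emissions, transitions):
        """emissions: (seq_len, n_tags) scores; transitions: (n_tags, n_tags),
        where transitions[i, j] scores moving from tag i to tag j."""
        seq_len, n_tags = emissions.shape
        score = emissions[0].clone()  # best score for each possible first tag
        backpointers = []
        for t in range(1, seq_len):
            # total[i, j] = score[i] + transitions[i, j] + emissions[t, j]
            total = score.unsqueeze(1) + transitions + emissions[t].unsqueeze(0)
            score, best_prev = total.max(dim=0)  # best previous tag per current tag
            backpointers.append(best_prev)
        best_last = int(score.argmax())
        path = [best_last]
        for best_prev in reversed(backpointers):
            path.append(int(best_prev[path[-1]]))
        return list(reversed(path))

    # Toy run: 5 tokens, 3 tags.
    tags = viterbi_decode(torch.randn(5, 3), torch.randn(3, 3))
    print(tags)  # e.g. [2, 0, 1, 1, 0]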