Named Entity Recognition (NER)

926 papers with code • 76 benchmarks • 127 datasets

Named Entity Recognition (NER) is a Natural Language Processing (NLP) task that involves identifying and classifying named entities in text into predefined categories such as person names, organizations, and locations. The goal of NER is to extract structured information from unstructured text and represent it in a machine-readable format. Approaches typically use BIO notation, which differentiates the beginning (B) and the inside (I) of entities; O is used for non-entity tokens.

Example:

Mark   Watney  visited  Mars
B-PER  I-PER   O        B-LOC

(Image credit: Zalando)
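Decoding BIO tags back into entity spans simply means grouping each B- tag with the I- tags of the same type that follow it. A minimal Python sketch of that decoding, applied to the example above (the function name and data layout are illustrative only, not taken from any particular library):

```python
def bio_to_spans(tokens, tags):
    """Group BIO tags into (entity_type, tokens) spans."""
    spans = []
    start, etype = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):                    # a new entity begins
            if etype is not None:
                spans.append((etype, tokens[start:i]))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and etype == tag[2:]:
            continue                                # current entity continues
        else:                                       # "O" or an inconsistent I- tag
            if etype is not None:
                spans.append((etype, tokens[start:i]))
            start, etype = None, None
    if etype is not None:                           # close an entity at sentence end
        spans.append((etype, tokens[start:]))
    return spans

tokens = ["Mark", "Watney", "visited", "Mars"]
tags   = ["B-PER", "I-PER", "O", "B-LOC"]
print(bio_to_spans(tokens, tags))
# [('PER', ['Mark', 'Watney']), ('LOC', ['Mars'])]
```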

Libraries

Use these libraries to find Named Entity Recognition (NER) models and implementations

Most implemented papers

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

google-research/bert NAACL 2019

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
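For NER, BERT is typically fine-tuned as a token classifier that predicts a BIO tag per word piece. A minimal usage sketch with the Hugging Face transformers pipeline is shown below; the checkpoint name is an assumption (any BERT model fine-tuned for token classification with BIO-style labels would work the same way):

```python
# Sketch: running a BERT token-classification model for NER via the
# Hugging Face `transformers` pipeline. The checkpoint name is an assumption.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",       # assumed checkpoint, fine-tuned on CoNLL-2003
    aggregation_strategy="simple",     # merge word pieces back into whole entities
)

for entity in ner("Mark Watney visited Mars"):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
# e.g. PER  Mark Watney  0.99...
#      LOC  Mars         0.99...
```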

Deep contextualized word representations

flairNLP/flair NAACL 2018

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
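The linked flair repository ships pre-trained sequence taggers built on contextualized embeddings such as these. A short usage sketch, assuming the flair package and its standard English "ner" tagger identifier:

```python
# Sketch of tagging a sentence with flair's pre-trained English NER tagger.
# Assumes the `flair` package is installed; "ner" is its 4-class
# (PER/LOC/ORG/MISC) model identifier.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("ner")            # downloads the model on first use
sentence = Sentence("Mark Watney visited Mars")
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
# e.g. Span[0:2]: "Mark Watney" -> PER (0.99)
#      Span[3:4]: "Mars" -> LOC (0.99)
```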

Neural Architectures for Named Entity Recognition

glample/tagger NAACL 2016

State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available.

Bidirectional LSTM-CRF Models for Sequence Tagging

determined22/zh-ner-tf 9 Aug 2015

The BiLSTM-CRF model can also use sentence-level tag information thanks to a CRF layer.
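A minimal sketch of that architecture: a bidirectional LSTM produces per-token emission scores, and a CRF layer scores whole tag sequences so predictions respect tag-to-tag dependencies. This version assumes PyTorch plus the third-party pytorch-crf package; the sizes and tag count are illustrative, not the paper's configuration:

```python
# Minimal BiLSTM-CRF sketch in PyTorch. Assumes the third-party `pytorch-crf`
# package (`pip install pytorch-crf`); hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchcrf import CRF

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            bidirectional=True, batch_first=True)
        self.emissions = nn.Linear(hidden_dim, num_tags)  # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)        # sentence-level tag dependencies

    def loss(self, token_ids, tags, mask):
        scores = self.emissions(self.lstm(self.embed(token_ids))[0])
        return -self.crf(scores, tags, mask=mask, reduction="mean")

    def decode(self, token_ids, mask):
        scores = self.emissions(self.lstm(self.embed(token_ids))[0])
        return self.crf.decode(scores, mask=mask)  # best tag sequence per sentence

# Toy usage: a batch of one 4-token sentence with 5 possible BIO tags.
model = BiLSTMCRF(vocab_size=1000, num_tags=5)
ids  = torch.tensor([[4, 7, 9, 12]])
tags = torch.tensor([[1, 2, 0, 3]])
mask = torch.ones_like(ids, dtype=torch.bool)
print(model.loss(ids, tags, mask))     # training objective (negative log-likelihood)
print(model.decode(ids, mask))         # e.g. [[1, 2, 0, 3]]
```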

End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF

guillaumegenthial/sequence_tagging ACL 2016

State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing.

BioBERT: a pre-trained biomedical language representation model for biomedical text mining

dmis-lab/biobert 25 Jan 2019

Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows.

ERNIE: Enhanced Representation through Knowledge Integration

PaddlePaddle/PaddleNLP 19 Apr 2019

We present a novel language representation model enhanced by knowledge called ERNIE (Enhanced Representation through kNowledge IntEgration).

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

microsoft/DeBERTa ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

Named Entity Recognition with Bidirectional LSTM-CNNs

zalandoresearch/flair TACL 2016

Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance.

NEZHA: Neural Contextualized Representation for Chinese Language Understanding

PaddlePaddle/PaddleNLP 31 Aug 2019

Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.