Named Entity Recognition

428 papers with code • 37 benchmarks • 62 datasets

Named entity recognition (NER) is the task of tagging entities in text with their corresponding type (e.g., person, location, organization). Approaches typically use BIO notation, which differentiates the beginning (B) and the inside (I) of entities. O is used for non-entity tokens.

Example:

Mark     Watney   visited  Mars
B-PER    I-PER    O        B-LOC

(Image credit: Zalando)
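Decoding BIO tags back into entity spans is the usual post-processing step after tagging. A minimal sketch in plain Python (the helper name `bio_to_spans` is illustrative, not from any of the papers listed below):

```python
def bio_to_spans(tokens, tags):
    """Decode parallel token/BIO-tag lists into (entity_text, entity_type) pairs."""
    spans = []
    current_tokens, current_type = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always starts a new entity, closing any open one.
            if current_tokens:
                spans.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # An I- tag extends the open entity only if the types match.
            current_tokens.append(token)
        else:
            # "O", or an I- tag that does not continue the open entity.
            if current_tokens:
                spans.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [], None
    if current_tokens:
        spans.append((" ".join(current_tokens), current_type))
    return spans

tokens = ["Mark", "Watney", "visited", "Mars"]
tags = ["B-PER", "I-PER", "O", "B-LOC"]
print(bio_to_spans(tokens, tags))  # [('Mark Watney', 'PER'), ('Mars', 'LOC')]
```

The example above reproduces the sentence from the illustration: the B-/I- prefixes let the decoder keep "Mark Watney" as one PER entity while "Mars" starts a separate LOC entity.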

Greatest papers with code

Semi-Supervised Sequence Modeling with Cross-View Training

tensorflow/models EMNLP 2018

We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.

CCG Supertagging, Dependency Parsing (+5 more)

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

tensorflow/models NAACL 2019

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.

Common Sense Reasoning, Conversational Response Selection (+6 more)

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

huggingface/transformers ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

Common Sense Reasoning, Coreference Resolution (+9 more)

FLERT: Document-Level Features for Named Entity Recognition

zalandoresearch/flair 13 Nov 2020

Current state-of-the-art approaches for named entity recognition (NER) typically consider text at the sentence-level and thus do not model information that crosses sentence boundaries.

Named Entity Recognition

Pooled Contextualized Embeddings for Named Entity Recognition

zalandoresearch/flair NAACL 2019

We make all code and pre-trained models available to the research community for use and reproduction.

Ranked #16 on Named Entity Recognition on CoNLL 2003 (English) (using extra training data)

Named Entity Recognition, NER

FLAIR: An Easy-to-Use Framework for State-of-the-Art NLP

zalandoresearch/flair NAACL 2019

We present FLAIR, an NLP framework designed to facilitate training and distribution of state-of-the-art sequence labeling, text classification and language models.

Chunking, Named Entity Recognition (+1 more)

Contextual String Embeddings for Sequence Labeling

zalandoresearch/flair COLING 2018

Recent advances in language modeling using recurrent neural networks have made it viable to model language as distributions over characters.

Chunking, Language Modelling (+3 more)

Deep contextualized word representations

zalandoresearch/flair NAACL 2018

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).

Ranked #2 on Citation Intent Classification on ACL-ARC (using extra training data)

Citation Intent Classification, Conversational Response Selection (+7 more)

Semi-supervised sequence tagging with bidirectional language models

flairNLP/flair ACL 2017

Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks.

Chunking, Named Entity Recognition (+1 more)