NER
530 papers with code • 5 benchmarks • 22 datasets
Named entity recognition (NER) is the task of identifying key spans of information in text and classifying them into a set of predefined categories, such as person names, places, organizations, and dates. Unlike part-of-speech (PoS) tagging, which assigns a grammatical label to every token, NER targets only the entity mentions in the text.
Libraries
Use these libraries to find NER models and implementations.
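As a quick illustration of how such libraries are used, here is a minimal sketch of running an off-the-shelf NER model with the Hugging Face transformers pipeline. The checkpoint named below (dslim/bert-base-NER) is one popular community model chosen for illustration, an assumption rather than a recommendation; any token-classification checkpoint works the same way.

```python
# Minimal NER sketch using the Hugging Face `transformers` pipeline.
# "dslim/bert-base-NER" is an assumed example checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",    # assumed example checkpoint
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

for ent in ner("Barack Obama was born in Hawaii."):
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))
# Expected output, roughly: PER "Barack Obama" and LOC "Hawaii"
```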
Most implemented papers
NEZHA: Neural Contextualized Representation for Chinese Language Understanding
Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.
CrossNER: Evaluating Cross-Domain Named Entity Recognition
Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains.
Fast and Accurate Entity Recognition with Iterated Dilated Convolutions
Today when many practitioners run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs.
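To make the architecture concrete, below is a minimal PyTorch sketch of a dilated-CNN tagger in the spirit of this paper: stacked 1-D convolutions with growing dilation widen the receptive field over the sentence without an RNN's sequential dependency, so all positions are tagged in parallel. All layer sizes and dilation rates here are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a dilated-CNN sequence tagger (ID-CNN style) in PyTorch.
# Hyperparameters are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

class DilatedCNNTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=128, hidden=128,
                 dilations=(1, 2, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        layers, in_ch = [], emb_dim
        for d in dilations:
            # padding=d keeps the sequence length fixed for kernel size 3
            layers += [nn.Conv1d(in_ch, hidden, kernel_size=3,
                                 dilation=d, padding=d),
                       nn.ReLU()]
            in_ch = hidden
        self.convs = nn.Sequential(*layers)
        self.out = nn.Linear(hidden, num_tags)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, emb, seq_len)
        x = self.convs(x).transpose(1, 2)          # (batch, seq_len, hidden)
        return self.out(x)                         # per-token tag scores

tagger = DilatedCNNTagger(vocab_size=10_000, num_tags=9)
scores = tagger(torch.randint(0, 10_000, (2, 20)))  # shape (2, 20, 9)
```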
Why Attention? Analyze BiLSTM Deficiency and Its Remedies in the Case of NER
We test the practical impacts of the deficiency on real-world NER datasets, OntoNotes 5.0 and WNUT 2017, with clear and consistent improvements over the baseline, up to 8.7% on some of the multi-token entity mentions.
Empower Sequence Labeling with Task-Aware Neural Language Model
In this study, we develop a novel neural framework to extract abundant knowledge hidden in raw texts to empower the sequence labeling task.
Chinese NER Using Lattice LSTM
We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon.
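The preprocessing step the lattice relies on, enumerating every lexicon word that matches a span of the character sequence, can be sketched in a few lines. The toy lexicon and sentence below follow the paper's well-known "南京市长江大桥" example; the helper function is an illustrative assumption, not the authors' code.

```python
# Sketch: enumerate lexicon words matching spans of a character sequence,
# the inputs a lattice LSTM consumes alongside the characters themselves.
def match_lexicon(chars, lexicon, max_word_len=4):
    """Return (start, end, word) for every lexicon word found in `chars`."""
    spans = []
    for i in range(len(chars)):
        for j in range(i + 1, min(i + max_word_len, len(chars)) + 1):
            word = "".join(chars[i:j])
            if word in lexicon:
                spans.append((i, j, word))
    return spans

lexicon = {"南京", "南京市", "市长", "长江", "长江大桥", "大桥"}
sentence = list("南京市长江大桥")  # "Nanjing Yangtze River Bridge"
for start, end, word in match_lexicon(sentence, lexicon):
    print(start, end, word)
# e.g. (0,2) 南京, (0,3) 南京市, (2,4) 市长, (3,5) 长江, (3,7) 长江大桥, (5,7) 大桥
```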
CLUENER2020: Fine-grained Named Entity Recognition Dataset and Benchmark for Chinese
In this paper, we introduce the NER dataset from CLUE organization (CLUENER2020), a well-defined fine-grained dataset for named entity recognition in Chinese.
AraBERT: Transformer-based Model for Arabic Language Understanding
Recently, with the surge of transformer-based models, language-specific BERT-based models have proven to be very efficient at language understanding, provided they are pre-trained on a very large corpus.
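As a sketch of how such a language-specific checkpoint is consumed downstream, the snippet below loads AraBERT for token classification from the Hugging Face hub. The hub ID aubmindlab/bert-base-arabertv2 is taken from the authors' released checkpoints and should be treated as an assumption here; the label count is likewise illustrative.

```python
# Sketch: load a language-specific BERT (here AraBERT) for downstream NER.
# The hub ID is assumed from the authors' released checkpoints.
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "aubmindlab/bert-base-arabertv2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id, num_labels=9  # e.g. BIO tags over 4 entity types + "O"
)
inputs = tokenizer("جامعة بيروت العربية", return_tensors="pt")
logits = model(**inputs).logits  # (1, seq_len, num_labels); head untrained
```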
Parallel sequence tagging for concept recognition
In all 20 annotation sets of the concept-annotation task, our system outperforms the pipeline system reported as a baseline in the CRAFT shared task 2019.
Beheshti-NER: Persian Named Entity Recognition Using BERT
In this paper, we use the pre-trained deep bidirectional network BERT to build a model for named entity recognition in Persian.
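The recipe shared by this and similar papers, fine-tuning a pre-trained BERT with a token-classification head on labeled NER data, reduces in sketch form to the loop below. The multilingual checkpoint, tag set, and single toy training step are assumptions for illustration, not the exact Beheshti-NER configuration.

```python
# Sketch: fine-tuning a pre-trained BERT for NER as token classification.
# Checkpoint, labels, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]  # assumed tag set
model_id = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id, num_labels=len(labels)
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One toy training step on a single Persian sentence.
enc = tokenizer("تهران پایتخت ایران است", return_tensors="pt")
gold = torch.zeros_like(enc["input_ids"])  # all "O" here, for illustration
loss = model(**enc, labels=gold).loss      # cross-entropy over the tags
loss.backward()
optimizer.step()
```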