named-entity-recognition
796 papers with code • 0 benchmarks • 2 datasets
Latest papers with no code
Do "English" Named Entity Recognizers Work Well on Global Englishes?
We test widely used NER toolkits and transformer models, including models built on the pre-trained contextual encoders RoBERTa and ELECTRA, on three datasets: CoNLL 2003, a commonly used British English newswire dataset; OntoNotes, a more American-focused dataset; and our global dataset.
A Continual Relation Extraction Approach for Knowledge Graph Completeness
Representing unstructured data in a structured form is essential for information systems to analyze and interpret it.
Few-shot Name Entity Recognition on StackOverflow
StackOverflow, with its vast question repository and limited labeled examples, raises an annotation challenge for us.
ToNER: Type-oriented Named Entity Recognition with Generative Language Model
In recent years, fine-tuned generative models have proven more powerful than previous tagging-based or span-based models on the named entity recognition (NER) task.
Low-Resource Named Entity Recognition with Cross-Lingual, Character-Level Neural Conditional Random Fields
Low-resource named entity recognition is still an open problem in NLP.
Hybrid Multi-stage Decoding for Few-shot NER with Entity-aware Contrastive Learning
During training, we train the entity-span detection model and the entity classification model separately on the source domain using meta-learning, and we add a contrastive learning module to enhance entity representations for entity classification.
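The contrastive learning module described above can be illustrated with a generic supervised contrastive loss over entity embeddings: entities of the same type act as positives pulled together, all others as negatives pushed apart. This is a minimal NumPy sketch of that general technique, not the paper's actual implementation; the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Generic supervised contrastive loss (illustrative, not the paper's code).

    Same-label entity embeddings are treated as positives; the loss is the
    mean negative log-likelihood of each positive against all other samples.
    """
    # L2-normalize so the dot product is cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature

    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        others = [j for j in range(n) if j != i]
        # log of the softmax denominator over all non-self samples.
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        for p in positives:
            total += -(sim[i, p] - log_denom)
            count += 1
    return total / count
```

Tight same-type clusters yield a lower loss than embeddings where entity types are interleaved, which is exactly the pressure that sharpens entity representations for the downstream classifier.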
LLMs in Biomedicine: A study on clinical Named Entity Recognition
Large Language Models (LLMs) demonstrate remarkable versatility in various NLP tasks but encounter distinct challenges in biomedicine due to medical language complexities and data scarcity.
ClinLinker: Medical Entity Linking of Clinical Concept Mentions in Spanish
This study presents ClinLinker, a novel approach employing a two-phase pipeline for medical entity linking that leverages in-domain adapted language models for biomedical text mining: initial candidate retrieval using a SapBERT-based bi-encoder, followed by re-ranking with a cross-encoder trained with a contrastive-learning strategy tailored to medical concepts in Spanish.
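The retrieve-then-rerank pattern above can be sketched in a few lines: a bi-encoder stage embeds mention and concepts independently and keeps the nearest candidates cheaply, then a cross-encoder stage re-scores each surviving pair jointly. This is a hypothetical NumPy sketch of the general pattern, not ClinLinker's code; the function names and the `cross_score` callable are stand-ins for the actual SapBERT bi-encoder and trained cross-encoder.

```python
import numpy as np

def retrieve_candidates(mention_vec, concept_vecs, k=3):
    """Stage 1 (bi-encoder style): rank all concepts by cosine similarity
    to the mention embedding and keep the top-k candidate indices."""
    m = mention_vec / np.linalg.norm(mention_vec)
    c = concept_vecs / np.linalg.norm(concept_vecs, axis=1, keepdims=True)
    scores = c @ m
    return list(np.argsort(scores)[::-1][:k])

def rerank(mention_vec, concept_vecs, candidate_ids, cross_score):
    """Stage 2 (cross-encoder style): re-score each mention/candidate pair
    with a joint scoring function and return candidates best-first.
    `cross_score` is a placeholder for a trained cross-encoder."""
    scored = [(cid, cross_score(mention_vec, concept_vecs[cid]))
              for cid in candidate_ids]
    return [cid for cid, _ in sorted(scored, key=lambda t: -t[1])]
```

The split matters for cost: the bi-encoder scores every concept with one dot product each, while the more expensive pairwise scorer only sees the short candidate list.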
Comprehensive Study on German Language Models for Clinical and Biomedical Text Understanding
Recent advances in natural language processing (NLP) can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa.
LTNER: Large Language Model Tagging for Named Entity Recognition with Contextualized Entity Marking
The use of LLMs for natural language processing has become a popular trend over the past two years, driven by their formidable capacity for context comprehension and learning, which has inspired a wave of research from academia and industry.