Named Entity Recognition
790 papers with code • 2 benchmarks • 2 datasets
Benchmarks
These leaderboards are used to track progress in named entity recognition.
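NER benchmarks are usually scored at the entity level against BIO-tagged gold annotations. As a minimal illustration of that output format (not any benchmarked system), here is a toy dictionary-based tagger; the gazetteer entries are invented for the example:

```python
# Toy dictionary-based NER tagger, sketched only to illustrate the
# BIO tagging scheme that NER benchmarks typically score against.
# The gazetteer below is illustrative, not taken from any dataset.

GAZETTEER = {
    ("Alan", "Turing"): "PER",
    ("New", "York"): "LOC",
    ("London",): "LOC",
}

def bio_tag(tokens):
    """Assign B-/I-/O tags by longest match against the gazetteer."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        # try the longest gazetteer span starting at position i first
        for length in (2, 1):
            if i + length > len(tokens):
                continue
            span = tuple(tokens[i:i + length])
            if span in GAZETTEER:
                label = GAZETTEER[span]
                tags[i] = f"B-{label}"
                for j in range(i + 1, i + length):
                    tags[j] = f"I-{label}"
                i += length
                matched = True
                break
        if not matched:
            i += 1
    return tags
```

For example, `bio_tag(["Alan", "Turing", "visited", "New", "York"])` yields `["B-PER", "I-PER", "O", "B-LOC", "I-LOC"]`; entity-level metrics then count a prediction as correct only if both the span and its label match.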
Libraries
Use these libraries to find named entity recognition models and implementations.

Latest papers with no code
Few-shot Name Entity Recognition on StackOverflow
StackOverflow, with its vast question repository but limited labeled examples, raises an annotation challenge for us.
ToNER: Type-oriented Named Entity Recognition with Generative Language Model
In recent years, fine-tuned generative models have proven more powerful than previous tagging-based or span-based models on the named entity recognition (NER) task.
Low-Resource Named Entity Recognition with Cross-Lingual, Character-Level Neural Conditional Random Fields
Low-resource named entity recognition is still an open problem in NLP.
Hybrid Multi-stage Decoding for Few-shot NER with Entity-aware Contrastive Learning
During training, we separately train the entity-span detection model and the entity classification model on the source domain using meta-learning and keep the best of each; a contrastive learning module enhances entity representations for entity classification.
LLMs in Biomedicine: A study on clinical Named Entity Recognition
Large Language Models (LLMs) demonstrate remarkable versatility in various NLP tasks but encounter distinct challenges in biomedicine due to medical language complexities and data scarcity.
ClinLinker: Medical Entity Linking of Clinical Concept Mentions in Spanish
This study presents ClinLinker, a novel approach employing a two-phase pipeline for medical entity linking that leverages the potential of in-domain adapted language models for biomedical text mining: initial candidate retrieval using a SapBERT-based bi-encoder and subsequent re-ranking with a cross-encoder, trained by following a contrastive-learning strategy to be tailored to medical concepts in Spanish.
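The retrieve-then-rerank pipeline in that abstract has a common shape: a cheap bi-encoder scores every concept independently to shortlist top-k candidates, then a slower cross-encoder re-scores each (mention, candidate) pair jointly. A minimal sketch, with both encoders stubbed by toy similarity functions (character-trigram overlap and shared-word counts) rather than the SapBERT-based models ClinLinker actually trains:

```python
# Sketch of two-phase entity linking: bi-encoder candidate retrieval,
# then cross-encoder re-ranking. Similarity functions are toy stand-ins.

def bi_encoder_retrieve(mention, concepts, k=3):
    """Phase 1: rank all concepts by character-trigram overlap with the
    mention (stand-in for dot products of independently encoded vectors)."""
    def trigrams(s):
        s = s.lower()
        return {s[i:i + 3] for i in range(len(s) - 2)}
    m = trigrams(mention)
    return sorted(concepts, key=lambda c: -len(m & trigrams(c)))[:k]

def cross_encoder_rerank(mention, candidates):
    """Phase 2: jointly re-score each (mention, candidate) pair
    (stand-in for one cross-encoder forward pass per pair)."""
    def joint_score(a, b):
        # toy joint signal: number of shared lowercase words
        return len(set(a.lower().split()) & set(b.lower().split()))
    return max(candidates, key=lambda c: joint_score(mention, c))
```

For a mention like `"heart attack"` over a small invented concept list, phase 1 shortlists the string-similar concepts and phase 2 picks the best of those; the point of the split is that phase 1 stays linear in the concept inventory while the expensive pairwise scoring runs only on k candidates.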
Comprehensive Study on German Language Models for Clinical and Biomedical Text Understanding
Recent advances in natural language processing (NLP) can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa.
LTNER: Large Language Model Tagging for Named Entity Recognition with Contextualized Entity Marking
The use of LLMs for natural language processing has become a popular trend in the past two years, driven by their formidable capacity for context comprehension and learning, which has inspired a wave of research from academics and industry professionals.
Enhancing Software Related Information Extraction with Generative Language Models through Single-Choice Question Answering
This paper describes our participation in the Shared Task on Software Mentions Disambiguation (SOMD), with a focus on improving relation extraction in scholarly texts through Generative Language Models (GLMs) using single-choice question-answering.
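Recasting relation extraction as single-choice question answering, as that abstract describes, amounts to turning each candidate relation into one lettered option and asking the generative model to pick exactly one. A sketch of the prompt construction; the wording and the relation inventory here are illustrative, not the SOMD task's actual label set:

```python
# Sketch of framing relation extraction as single-choice QA for a
# generative model. Relation names and prompt wording are invented
# placeholders, not the shared task's real schema.

RELATIONS = ["Usage", "Creation", "Citation", "None of the above"]

def single_choice_prompt(sentence, software, candidates=RELATIONS):
    """Build one single-choice question for a software mention."""
    options = "\n".join(
        f"({chr(ord('A') + i)}) {rel}" for i, rel in enumerate(candidates)
    )
    return (
        f"Sentence: {sentence}\n"
        f"Which relation holds for the software mention '{software}'? "
        f"Choose exactly one option.\n{options}\nAnswer:"
    )
```

Constraining the model to one letter per question makes the free-form generation trivially parseable, which is much of the appeal over open-ended relation generation.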
How much reliable is ChatGPT's prediction on Information Extraction under Input Perturbations?
In this paper, we assess the robustness (reliability) of ChatGPT under input perturbations on one of the most fundamental Information Extraction (IE) tasks, namely Named Entity Recognition (NER).
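The measurement that abstract implies can be sketched independently of any particular model: run the same NER system on an original sentence and on a perturbed copy, then score how much the two entity predictions agree. The "model" below is a stub capitalization heuristic and the case-change perturbation is just one illustrative perturbation type, both invented for the sketch:

```python
# Sketch of a robustness probe for NER under input perturbation.
# stub_ner is a toy stand-in for the model under test (e.g. an LLM).

def perturb_case(tokens, i):
    """One toy perturbation: lowercase the token at position i."""
    out = list(tokens)
    out[i] = out[i].lower()
    return out

def stub_ner(tokens):
    """Stand-in model: non-initial capitalized tokens are entity positions."""
    return {i for i, t in enumerate(tokens) if t[:1].isupper() and i > 0}

def agreement(model, tokens, perturbed):
    """Jaccard agreement between entity predictions before and after perturbing."""
    before, after = model(tokens), model(perturbed)
    union = before | after
    return 1.0 if not union else len(before & after) / len(union)
```

On `["The", "mayor", "of", "Paris", "spoke"]`, lowercasing "Paris" drops the agreement to 0.0 while perturbing a non-entity token leaves it at 1.0; averaging such scores over a test set, per perturbation type, gives the kind of robustness profile the paper reports for ChatGPT.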