12 papers with code • 3 benchmarks • 3 datasets
In this paper, we present Few-NERD, a large-scale human-annotated few-shot NER dataset with a hierarchy of 8 coarse-grained and 66 fine-grained entity types.
We present a simple few-shot named entity recognition (NER) system based on nearest neighbor learning and structured inference.
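The nearest-neighbor idea above can be sketched with toy vectors: each query token takes the label of its closest support token. This is a minimal illustration only; the embeddings, labels, and function names here are hypothetical stand-ins, and a real system would use contextual encoder representations (e.g., from BERT) plus the structured Viterbi-style inference the paper describes.

```python
# Minimal sketch of nearest-neighbor token tagging for few-shot NER.
# Toy 2-D "embeddings" stand in for contextual encoder outputs.
import numpy as np

def nearest_neighbor_tag(support_embs, support_labels, query_embs):
    """Assign each query token the label of its nearest support token."""
    tags = []
    for q in query_embs:
        dists = np.linalg.norm(support_embs - q, axis=1)  # Euclidean distance
        tags.append(support_labels[int(np.argmin(dists))])
    return tags

# Two support tokens, one per class (labels are illustrative).
support = np.array([[0.0, 1.0],   # a PERSON-like token
                    [1.0, 0.0]])  # an O (non-entity) token
labels = ["PER", "O"]
queries = np.array([[0.1, 0.9], [0.9, 0.2]])
print(nearest_neighbor_tag(support, labels, queries))  # ['PER', 'O']
```

Because classification reduces to distance lookups in the support set, no per-domain fine-tuning is needed, which is what makes the approach attractive in the few-shot setting.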
To address the issue, we propose a template-based method for NER, treating NER as a language model ranking problem in a sequence-to-sequence framework, where the original sentence and statement templates filled with candidate named entity spans are regarded as the source sequence and the target sequence, respectively.
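The template-ranking formulation can be illustrated as follows. In the actual method, each filled template is scored by a sequence-to-sequence language model (e.g., BART); in this sketch the scoring function is a hypothetical stand-in backed by a toy gazetteer, so only the control flow of "fill every template, keep the best-scoring type" is shown.

```python
# Sketch of NER as template ranking: fill one statement template per
# entity type with a candidate span, score each filled template, and
# return the highest-scoring type. The gazetteer-based `score` is a toy
# substitute for a real seq2seq LM score.
TEMPLATES = {
    "PER": "{span} is a person entity.",
    "LOC": "{span} is a location entity.",
    "O":   "{span} is not a named entity.",
}
GAZETTEER = {"Obama": "PER", "Paris": "LOC"}  # toy knowledge source

def score(sentence, filled_template, entity_type, span):
    # Stand-in for an LM score: 1.0 if the toy gazetteer agrees.
    return 1.0 if GAZETTEER.get(span, "O") == entity_type else 0.0

def classify_span(sentence, span):
    """Fill every template with the span and return the best-scoring type."""
    scored = {t: score(sentence, TEMPLATES[t].format(span=span), t, span)
              for t in TEMPLATES}
    return max(scored, key=scored.get)

print(classify_span("Obama visited Paris", "Obama"))  # PER
print(classify_span("Obama visited Paris", "Paris"))  # LOC
```

A design consequence worth noting: because entity types are expressed as natural-language templates rather than classifier heads, new types can be added at inference time by writing a new template, which suits low-resource transfer.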
Few-shot Named Entity Recognition (NER) exploits only a handful of annotations to identify and classify named entity mentions.
The results show that auto-regressive language models as meta-learners can perform NET and NER fairly well, especially for regular or seen names; that name irregularity, when frequent for a certain entity type, can become an effective exploitable cue; that names containing words foreign to the model hurt results the most; and that the model seems to rely more on name cues than context cues in few-shot NER.
Few-Shot Sequence Labeling (FSSL) is a canonical paradigm for tagging models, e.g., named entity recognition and slot filling, to generalize to an emerging, resource-scarce domain.
Surprisingly, on NCBI-disease, our model achieves a 75.5 F1 score, outperforming by 4.1 F1 the previous best weakly supervised model, which relies on a rich in-domain dictionary provided by domain experts.
In this paper, we propose a self-describing mechanism for few-shot NER, which can effectively leverage illustrative instances and precisely transfer knowledge from external resources by describing both entity types and mentions using a universal concept set.