NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data

Large Language Models (LLMs) have shown impressive abilities in data annotation, opening the way for new approaches to solve classic NLP problems. In this paper, we show how to use LLMs to create NuNER, a compact language representation model specialized in the Named Entity Recognition (NER) task. NuNER can be fine-tuned to solve downstream NER problems in a data-efficient way, outperforming similar-sized foundation models in the few-shot regime and competing with much larger LLMs. We find that the size and entity-type diversity of the pre-training dataset are key to achieving good performance. We view NuNER as a member of the broader family of task-specific foundation models, recently unlocked by LLMs.


Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Zero-shot NER | Broad Twitter Corpus | NuNER Zero Span | Entity F1 | 60.2 | #1
Zero-shot NER | CrossNER | NuNER Zero Span | AI | 61.7 | #1
Zero-shot NER | CrossNER | NuNER Zero Span | Literature | 64.9 | #1
Zero-shot NER | CrossNER | NuNER Zero Span | Music | 69.9 | #1
Zero-shot NER | CrossNER | NuNER Zero Span | Politics | 71.7 | #1
Zero-shot NER | CrossNER | NuNER Zero Span | Science | 65.4 | #1
Few-shot NER | Few-NERD (INTER) | NuNER | 5 way 1~2 shot | 67.37±0.31 | #3
Few-shot NER | Few-NERD (INTER) | NuNER | 5 way 5~10 shot | 73.50±0.09 | #3
Few-shot NER | Few-NERD (INTER) | NuNER | 10 way 1~2 shot | 66.54±0.40 | #2
Few-shot NER | Few-NERD (INTER) | NuNER | 10 way 5~10 shot | 71.04±0.14 | #2
Few-shot NER | Few-NERD (INTRA) | NuNER | 5 way 1~2 shot | 62.48±0.28 | #2
Few-shot NER | Few-NERD (INTRA) | NuNER | 5 way 5~10 shot | 69.16±0.28 | #2
Few-shot NER | Few-NERD (INTRA) | NuNER | 10 way 1~2 shot | 57.63±0.38 | #1
Few-shot NER | Few-NERD (INTRA) | NuNER | 10 way 5~10 shot | 62.99±0.27 | #2
NER | Few-NERD (SUP) | NuNER | Precision | 67.8 | #3
NER | Few-NERD (SUP) | NuNER | Recall | 71.1 | #1
NER | Few-NERD (SUP) | NuNER | F1-Measure | 69.4 | #3
Zero-shot NER | HarveyNER | NuNER Zero Span | Entity F1 | 24.9 | #2
NER | NCBI-disease | NuNER Zero Span | F1 | 61.1 | #26
NER | OntoNotes v5 (English) | NuNER | F1 | 89.1 | #16
NER | OntoNotes v5 (English) | NuNER | Precision | 87.8 | #3
NER | OntoNotes v5 (English) | NuNER | Recall | 90.5 | #2
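The Entity F1 figures above are span-level scores: a predicted entity counts as correct only if both its boundaries and its type exactly match a gold entity, and precision/recall/F1 are micro-averaged over the corpus. A minimal sketch of that metric (the function and variable names are illustrative, not from the paper):

```python
def entity_f1(gold_docs, pred_docs):
    """Micro-averaged entity-level precision, recall, and F1.

    Each document is a set of (start, end, entity_type) spans; a
    predicted span is a true positive only on an exact match.
    """
    tp = fp = fn = 0
    for gold, pred in zip(gold_docs, pred_docs):
        tp += len(gold & pred)   # exact span + type matches
        fp += len(pred - gold)   # predicted but not in gold
        fn += len(gold - pred)   # gold but not predicted
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# One document: the PER span matches, the second span has the wrong type.
gold = [{(0, 2, "PER"), (5, 7, "LOC")}]
pred = [{(0, 2, "PER"), (5, 7, "ORG")}]
print(entity_f1(gold, pred))  # (0.5, 0.5, 0.5)
```

In practice, toolkits such as seqeval compute the same statistic from BIO-tagged sequences; the set-based version above makes the exact-match criterion explicit.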
