NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data

Large Language Models (LLMs) have shown impressive abilities in data annotation, opening the way for new approaches to solve classic NLP problems. In this paper, we show how to use LLMs to create NuNER, a compact language representation model specialized in the Named Entity Recognition (NER) task. NuNER can be fine-tuned to solve downstream NER problems in a data-efficient way, outperforming similar-sized foundation models in the few-shot regime and competing with much larger LLMs. We find that the size and entity-type diversity of the pre-training dataset are key to achieving good performance. We view NuNER as a member of the broader family of task-specific foundation models, recently unlocked by LLMs.
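
Below is a minimal sketch, not the authors' code, of how a NuNER-style encoder could be fine-tuned on a downstream NER task as standard token classification with Hugging Face transformers. The checkpoint id "numind/NuNER-v1.0", the toy label set, and the single training example are assumptions for illustration only.

from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

model_name = "numind/NuNER-v1.0"  # assumed checkpoint id; substitute your own
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]  # toy label set

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

# One toy training example: word-level tokens with their NER tags.
words = ["Marie", "Curie", "joined", "the", "Sorbonne", "."]
word_tags = ["B-PER", "I-PER", "O", "O", "B-ORG", "O"]

# Tokenize and align word-level labels to sub-word tokens (-100 is ignored by the loss).
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
aligned = [
    -100 if wid is None else labels.index(word_tags[wid])
    for wid in enc.word_ids(batch_index=0)
]
enc["labels"] = torch.tensor([aligned])

# A single optimization step; in practice wrap this in a Trainer or a full training loop.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss = model(**enc).loss
loss.backward()
optimizer.step()

In the few-shot regime described in the paper, the same procedure would simply be run on a handful of labeled examples per entity type.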


Datasets


Introduced in the Paper:

NuNER

Used in the Paper:

C4, Few-NERD

Results from the Paper


Task          Dataset            Model   Metric Name        Metric Value   Global Rank
Few-shot NER  Few-NERD (INTER)   NuNER   5 way 1~2 shot     67.37±0.31     # 2
Few-shot NER  Few-NERD (INTER)   NuNER   5 way 5~10 shot    73.50±0.09     # 2
Few-shot NER  Few-NERD (INTER)   NuNER   10 way 1~2 shot    66.54±0.40     # 2
Few-shot NER  Few-NERD (INTER)   NuNER   10 way 5~10 shot   71.04±0.14     # 2
Few-shot NER  Few-NERD (INTRA)   NuNER   5 way 1~2 shot     62.48±0.28     # 1
Few-shot NER  Few-NERD (INTRA)   NuNER   5 way 5~10 shot    69.16±0.28     # 1
Few-shot NER  Few-NERD (INTRA)   NuNER   10 way 1~2 shot    57.63±0.38     # 1
Few-shot NER  Few-NERD (INTRA)   NuNER   10 way 5~10 shot   62.99±0.27     # 2
