WikiNEuRal: Combined Neural and Knowledge-based Silver Data Creation for Multilingual NER

Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.
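As a rough illustration of the combination idea described in the abstract, the toy sketch below merges a knowledge-based annotator with a neural tagger and keeps only the Wikipedia sentences on which the two agree. This is not the authors' pipeline: `kb_annotate` and `neural_annotate` are hypothetical stand-ins for a knowledge-based tagger (e.g. projecting entity classes from Wikipedia hyperlinks) and a trained neural NER model, and the simple agreement filter is an assumption rather than the paper's exact combination strategy.

```python
# Toy illustration (not the authors' code) of combining knowledge-based and neural
# annotations into silver NER training data: keep a sentence only when both sources agree.
def build_silver_corpus(sentences, kb_annotate, neural_annotate):
    """sentences: iterable of token lists; the two annotators are hypothetical callables
    that return one BIO tag per token."""
    silver = []
    for tokens in sentences:
        kb_tags = kb_annotate(tokens)      # e.g. BIO tags projected from Wikipedia links
        nn_tags = neural_annotate(tokens)  # e.g. BIO tags predicted by a neural tagger
        if kb_tags == nn_tags:             # agreement filter: discard inconsistent sentences
            silver.append((tokens, kb_tags))
    return silver
```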


Datasets


Introduced in the Paper:

WikiNEuRal

Used in the Paper:

CoNLL 2003, CoNLL 2002
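For inspection, both the silver corpus and the evaluation benchmarks can be loaded with the Hugging Face `datasets` library. The dataset identifiers below ("Babelscape/wikineural", "conll2003") are assumptions about Hub names rather than something stated on this page.

```python
# Minimal sketch for loading the corpora; dataset identifiers are assumed Hub names.
from datasets import load_dataset

wikineural = load_dataset("Babelscape/wikineural")  # silver corpus introduced in the paper
conll2003 = load_dataset("conll2003")               # English/German gold benchmark used for evaluation

print(wikineural)                 # per-language splits
print(conll2003["train"][0])      # tokens and NER tags of the first training sentence
```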
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Named Entity Recognition | WikiNEuRal Dutch | BERT+Bi-LSTM+CRF | Span-Level Macro F1 | 94.0 | # 1 |
| Named Entity Recognition | WikiNEuRal English | BERT+Bi-LSTM+CRF | Span-Level Macro F1 | 95.4 | # 1 |
| Named Entity Recognition | WikiNEuRal French | BERT+Bi-LSTM+CRF | Span-Level Macro F1 | 94.0 | # 1 |
| Named Entity Recognition | WikiNEuRal German | BERT+Bi-LSTM+CRF | Span-Level Macro F1 | 94.0 | # 1 |
| Named Entity Recognition | WikiNEuRal Italian | BERT+Bi-LSTM+CRF | Span-Level Macro F1 | 94.6 | # 1 |
| Named Entity Recognition | WikiNEuRal Polish | BERT+Bi-LSTM+CRF | Span-Level Macro F1 | 94.0 | # 1 |
| Named Entity Recognition | WikiNEuRal Portuguese | BERT+Bi-LSTM+CRF | Span-Level Macro F1 | 94.0 | # 1 |
| Named Entity Recognition | WikiNEuRal Russian | BERT+Bi-LSTM+CRF | Span-Level Macro F1 | 94.0 | # 1 |
| Named Entity Recognition | WikiNEuRal Spanish | BERT+Bi-LSTM+CRF | Span-Level Macro F1 | 94.0 | # 1 |
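Every benchmark row above uses a BERT+Bi-LSTM+CRF tagger. The sketch below is a minimal, unofficial illustration of that architecture, assuming the `transformers` and `pytorch-crf` packages; the multilingual BERT checkpoint name and the LSTM size are placeholder choices, not values taken from the paper.

```python
# Minimal sketch of a BERT+Bi-LSTM+CRF token tagger (not the authors' implementation).
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF

class BertBiLstmCrfTagger(nn.Module):
    def __init__(self, num_tags, encoder_name="bert-base-multilingual-cased", lstm_hidden=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)        # contextual subword encoder
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)     # Bi-LSTM over BERT states
        self.emissions = nn.Linear(2 * lstm_hidden, num_tags)         # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)                    # models tag transitions

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        hidden, _ = self.lstm(hidden)
        emissions = self.emissions(hidden)
        mask = attention_mask.bool()
        if labels is not None:
            # Negative log-likelihood of the gold tag sequence under the CRF.
            # Padded label positions must hold a valid tag id (e.g. the "O" tag), not -100.
            return -self.crf(emissions, labels, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)                  # best tag sequence per sentence
```

The span-level macro F1 reported in the table can be computed from predicted and gold BIO tag sequences with an entity-level scorer such as `seqeval`, averaging F1 over entity classes.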

Methods


No methods listed for this paper.