Low Resource Named Entity Recognition
14 papers with code • 3 benchmarks • 4 datasets
Low-resource named entity recognition is the task of using data and models from a language with ample resources (e.g., English) to solve named entity recognition in another, typically lower-resource, language.
These leaderboards are used to track progress in Low Resource Named Entity Recognition.
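One common instantiation of this cross-lingual setup is annotation projection: NER labels from a high-resource source sentence are copied onto a target-language sentence through word alignments. The sketch below is illustrative only; the function name and the toy alignment are assumptions, not from any listed paper.

```python
# Hypothetical sketch of annotation projection for cross-lingual NER.
# Labels from an annotated source-language sentence are transferred to
# target-language tokens via word-alignment pairs.

def project_labels(source_labels, alignments, target_len):
    """Project BIO tags from source tokens to target tokens.

    source_labels: BIO tag per source token, e.g. ["B-PER", "I-PER", "O"]
    alignments:    list of (source_idx, target_idx) word-alignment pairs
    target_len:    number of tokens in the target sentence
    """
    target_labels = ["O"] * target_len
    for src_i, tgt_i in alignments:
        target_labels[tgt_i] = source_labels[src_i]
    return target_labels

# English source: "Barack Obama visited Paris"
src_tags = ["B-PER", "I-PER", "O", "B-LOC"]
# Toy alignment to a 4-token target sentence (identity here for clarity)
aligned = [(0, 0), (1, 1), (2, 2), (3, 3)]
print(project_labels(src_tags, aligned, 4))
```

In practice the alignments come from a statistical or neural word aligner, and projected labels are noisy, which motivates the noise-handling papers below.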
Most implemented papers
Towards Robust Named Entity Recognition for Historic German
Recent advances in language modeling using deep neural networks have shown that these models learn representations that vary with network depth, from morphology to semantic relationships such as coreference.
Massively Multilingual Transfer for NER
In cross-lingual transfer, NLP models trained on one or more source languages are applied to a low-resource target language.
Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with Noisy Labels
In low-resource settings, the performance of supervised labeling models can be improved with automatically annotated or distantly supervised data, which is cheap to create but often noisy.
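A basic building block behind this line of work is a label confusion matrix estimated from a small paired sample of clean and noisy annotations; the paper above additionally conditions it on input features. This minimal, feature-independent sketch is an assumption for illustration, not the paper's method.

```python
# Sketch: estimate P(noisy label | clean label) from a small sample of
# tokens that carry both a hand-annotated (clean) and a distantly
# supervised (noisy) label. The example data is invented.
from collections import Counter, defaultdict

def estimate_confusion(clean, noisy, labels):
    counts = defaultdict(Counter)
    for c, n in zip(clean, noisy):
        counts[c][n] += 1
    matrix = {}
    for c in labels:
        total = sum(counts[c].values())
        matrix[c] = {n: (counts[c][n] / total if total else 0.0)
                     for n in labels}
    return matrix

clean = ["PER", "PER", "O", "LOC", "O"]
noisy = ["PER", "O",   "O", "LOC", "O"]
m = estimate_confusion(clean, noisy, ["PER", "LOC", "O"])
# One of the two clean PER tokens was mislabeled O by distant supervision,
# so P(noisy=PER | clean=PER) = 0.5 here.
```

Such a matrix can then be used to down-weight or correct noisy training labels.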
Zero-Resource Cross-Lingual Named Entity Recognition
Recently, neural methods have achieved state-of-the-art (SOTA) results in Named Entity Recognition (NER) tasks for many languages without the need for manually crafted features.
Soft Gazetteers for Low-Resource Named Entity Recognition
However, designing such features for low-resource languages is challenging, because exhaustive entity gazetteers do not exist in these languages.
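The idea of a "soft" gazetteer feature can be sketched as a continuous match score against an incomplete entity list, rather than a binary exact-match flag. The fuzzy-matching choice below (string similarity via `difflib`) is an illustrative assumption, not the paper's exact feature definition.

```python
# Sketch: a soft gazetteer feature that returns the best fuzzy match
# between a candidate span and any entry in an (incomplete) gazetteer.
from difflib import SequenceMatcher

def soft_gazetteer_score(span, gazetteer):
    """Continuous gazetteer feature in [0, 1]; 1.0 means exact match."""
    return max(
        (SequenceMatcher(None, span.lower(), entry.lower()).ratio()
         for entry in gazetteer),
        default=0.0,
    )

loc_gazetteer = ["Paris", "Berlin", "Lagos"]
print(soft_gazetteer_score("Paris", loc_gazetteer))   # exact match
print(soft_gazetteer_score("Pariis", loc_gazetteer))  # near match, still high
```

A tagger can consume one such score per entity type as a dense input feature, which degrades gracefully when the gazetteer is small or transliterated.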
A Robust and Domain-Adaptive Approach for Low-Resource Named Entity Recognition
Building reliable named entity recognition (NER) systems from limited annotated data has recently attracted much attention.
ANEA: Distant Supervision for Low-Resource Named Entity Recognition
Distant supervision allows obtaining labeled training corpora for low-resource settings where only limited hand-annotated data exists.
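At its simplest, distant supervision for NER labels tokens by longest-match lookup against entity dictionaries. The sketch below shows that idea; the function name, gazetteer contents, and BIO labeling are illustrative assumptions, not ANEA's exact procedure.

```python
# Sketch: distantly supervised NER labeling via longest-match gazetteer
# lookup. The resulting labels are cheap to produce but noisy.

def distant_annotate(tokens, gazetteers):
    """Assign BIO labels by matching the longest span found in any
    gazetteer; gazetteers maps entity type -> set of entity strings."""
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        # Try the longest candidate span starting at position i first
        for j in range(len(tokens), i, -1):
            span = " ".join(tokens[i:j])
            for ent_type, entries in gazetteers.items():
                if span in entries:
                    labels[i] = f"B-{ent_type}"
                    for k in range(i + 1, j):
                        labels[k] = f"I-{ent_type}"
                    i = j
                    matched = True
                    break
            if matched:
                break
        if not matched:
            i += 1
    return labels

gaz = {"PER": {"Angela Merkel"}, "LOC": {"Berlin"}}
toks = ["Angela", "Merkel", "visited", "Berlin"]
print(distant_annotate(toks, gaz))
```

The resulting corpus typically has high precision on gazetteer entries but misses unlisted entities, which is exactly the noise the confusion-matrix work above tries to model.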
Memorisation versus Generalisation in Pre-trained Language Models
State-of-the-art pre-trained language models have been shown to memorise facts and perform well with limited amounts of training data.
Data Augmentation for Low-Resource Named Entity Recognition Using Backtranslation
State-of-the-art natural language processing systems rely on sizable training datasets to achieve high performance.
Low-Resource Named Entity Recognition Based on Multi-hop Dependency Trigger
This paper presents a simple and effective approach to low-resource named entity recognition (NER) based on multi-hop dependency triggers.