Enhanced Meta-Learning for Cross-lingual Named Entity Recognition with Minimal Resources

14 Nov 2019 · Qianhui Wu, Zijia Lin, Guoxin Wang, Hui Chen, Börje F. Karlsson, Biqing Huang, Chin-Yew Lin

For languages with no annotated resources, transferring knowledge from rich-resource languages is an effective solution for named entity recognition (NER). While existing methods all transfer the source-learned model directly to a target language, in this paper we propose to fine-tune the learned model with a few similar examples for a given test case, which could benefit the prediction by leveraging the structural and semantic information conveyed in such similar examples. To this end, we present a meta-learning algorithm that finds a good model parameter initialization capable of fast adaptation to a given test case, and we propose to construct multiple pseudo-NER tasks for meta-training by computing sentence similarities. To further improve the model's generalization ability across different languages, we introduce a masking scheme and augment the loss function with an additional maximum term during meta-training. We conduct extensive experiments on cross-lingual named entity recognition with minimal resources over five target languages. The results show that our approach significantly outperforms existing state-of-the-art methods across the board.
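The abstract compresses several moving parts: pseudo-task construction from sentence similarity, a MAML-style search for a fast-adapting initialization, and a loss augmented with a maximum term. The sketch below shows how these pieces could fit together. It is a minimal illustration, not the authors' code: the linear token tagger, the exact form of the maximum term, the value of `k`, and all function names (`build_pseudo_tasks`, `ner_loss`, `meta_train_step`) are assumptions, and the masking scheme is omitted.

```python
import torch
import torch.nn.functional as F

def build_pseudo_tasks(sent_reps, k=3):
    """Each sentence becomes the query of one pseudo-NER task; its k most
    similar sentences (by cosine similarity) form the support set."""
    reps = F.normalize(sent_reps, dim=-1)
    sims = reps @ reps.t()
    sims.fill_diagonal_(float("-inf"))        # a sentence never supports itself
    support = sims.topk(k, dim=-1).indices    # (n_sentences, k)
    return [(support[q].tolist(), q) for q in range(reps.size(0))]

def ner_loss(params, X, y, lam=0.1):
    """Token-tagging loss for a linear tagger (params = W, b); X is
    (n_tokens, d) features, y is (n_tokens,) tag ids. The `lam * max`
    term stands in for the paper's maximum term; its exact form here
    is an assumption."""
    W, b = params
    per_token = F.cross_entropy(X @ W + b, y, reduction="none")
    return per_token.mean() + lam * per_token.max()

def meta_train_step(params, tasks, X, y, inner_lr=1e-2, meta_lr=1e-3):
    """One MAML-style outer step: adapt on each task's support sentences
    with a single inner gradient step, evaluate the adapted ("fast")
    weights on the query sentence, and accumulate second-order
    meta-gradients into the shared initialization."""
    meta_grads = [torch.zeros_like(p) for p in params]
    for support_ids, q in tasks:
        Xs = torch.cat([X[i] for i in support_ids])
        ys = torch.cat([y[i] for i in support_ids])
        grads = torch.autograd.grad(ner_loss(params, Xs, ys), params,
                                    create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(params, grads)]
        outer_grads = torch.autograd.grad(ner_loss(fast, X[q], y[q]), params)
        meta_grads = [m + g for m, g in zip(meta_grads, outer_grads)]
    return [(p - meta_lr * m).detach().requires_grad_()
            for p, m in zip(params, meta_grads)]
```

Test time would mirror meta-training under this reading: retrieve the source sentences most similar to the test case, take a few inner steps from the meta-learned initialization, then predict.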


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Cross-Lingual NER | CoNLL Dutch | Base Model | F1 | 79.57 | #8 |
| Cross-Lingual NER | CoNLL Dutch | Meta-Cross | F1 | 80.44 | #6 |
| Cross-Lingual NER | CoNLL German | Base Model | F1 | 70.79 | #8 |
| Cross-Lingual NER | CoNLL German | Meta-Cross | F1 | 73.16 | #6 |
| Cross-Lingual NER | CoNLL Spanish | Base Model | F1 | 74.59 | #9 |
| Cross-Lingual NER | CoNLL Spanish | Meta-Cross | F1 | 76.75 | #6 |
| Cross-Lingual NER | Europeana French | Base Model | F1 | 50.89 | #2 |
| Cross-Lingual NER | Europeana French | Meta-Cross | F1 | 55.3 | #1 |
| Cross-Lingual NER | MSRA | Base Model | F1 | 76.42 | #2 |
| Cross-Lingual NER | MSRA | Meta-Cross | F1 | 77.89 | #1 |
