Frustratingly Easy Label Projection for Cross-lingual Transfer

28 Nov 2022 · Yang Chen, Chao Jiang, Alan Ritter, Wei Xu

Translating training data into many languages has emerged as a practical solution for improving cross-lingual transfer. For tasks that involve span-level annotations, such as information extraction or question answering, an additional label projection step is required to map annotated spans onto the translated texts. Recently, a few efforts have utilized a simple mark-then-translate method to jointly perform translation and projection by inserting special markers around the labeled spans in the original sentence. However, as far as we are aware, no empirical analysis has been conducted on how this approach compares to traditional annotation projection based on word alignment. In this paper, we present an extensive empirical study across 57 languages and three tasks (QA, NER, and Event Extraction) to evaluate the effectiveness and limitations of both methods, filling an important gap in the literature. Experimental results show that our optimized version of mark-then-translate, which we call EasyProject, is easily applied to many languages and works surprisingly well, outperforming the more complex word alignment-based methods. We analyze several key factors that affect the end-task performance, and show EasyProject works well because it can accurately preserve label span boundaries after translation. We will publicly release all our code and data.
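
To make the mark-then-translate idea concrete, below is a minimal Python sketch of marker-based label projection in the spirit of EasyProject: special markers are inserted around the labeled spans before translation, and the projected spans are read back out of the translated sentence by locating the surviving markers. The square-bracket marker format, the toy hard-coded "translation", and the helper names (insert_markers, recover_spans) are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Sketch of mark-then-translate label projection. Assumes non-overlapping
# character-level spans and that the MT system preserves the markers.
import re

def insert_markers(text, spans):
    """Wrap each (start, end, label) character span in "[ ... ]" markers.

    Returns the marked sentence plus the labels in left-to-right order,
    so they can be re-attached to the markers found in the translation.
    """
    spans = sorted(spans)                      # left-to-right order
    pieces, prev = [], 0
    for start, end, _ in spans:
        pieces.append(text[prev:start])
        pieces.append("[ " + text[start:end] + " ]")
        prev = end
    pieces.append(text[prev:])
    return "".join(pieces), [label for _, _, label in spans]

def recover_spans(marked_translation, labels):
    """Strip the markers from the translated sentence and record the
    character offsets of each bracketed span in the clean text."""
    pattern = re.compile(r"\[ (.*?) \]")
    clean_parts, spans = [], []
    cursor, clean_len = 0, 0
    for i, m in enumerate(pattern.finditer(marked_translation)):
        before = marked_translation[cursor:m.start()]
        clean_parts.append(before)
        clean_len += len(before)
        span_text = m.group(1)
        spans.append((clean_len, clean_len + len(span_text), labels[i]))
        clean_parts.append(span_text)
        clean_len += len(span_text)
        cursor = m.end()
    clean_parts.append(marked_translation[cursor:])
    return "".join(clean_parts), spans

if __name__ == "__main__":
    src = "Barack Obama visited Paris."
    spans = [(0, 12, "PER"), (21, 26, "LOC")]
    marked, labels = insert_markers(src, spans)
    # A real pipeline would send `marked` through a multilingual MT model;
    # here we fake a German output that keeps the markers intact.
    translated = "[ Barack Obama ] besuchte [ Paris ]."
    clean, projected = recover_spans(translated, labels)
    print(clean)      # Barack Obama besuchte Paris.
    print(projected)  # [(0, 12, 'PER'), (22, 27, 'LOC')]
```

In practice, the marked sentence is translated by a multilingual MT system, and the recovered character offsets serve directly as the projected annotations in the target language, avoiding a separate word-alignment step.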


Results from the Paper


Task: Cross-Lingual NER
Dataset: MasakhaNER2.0
Model: EasyProject

Language       F1     Global Rank
Bambara        45.8   #1
Ewe            78.5   #1
Fon            61.4   #1
Hausa          72.2   #2
Igbo           65.6   #2
Kinyarwanda    71.0   #1
Luganda        76.7   #1
Luo            50.2   #1
Mossi          53.1   #1
Chichewa       75.3   #2
chiShona       55.9   #2
Kiswahili      83.6   #2
Setswana       74.0   #1
Akan/Twi       65.3   #1
Wolof          58.9   #1
isiXhosa       71.1   #2
Yoruba         36.8   #2
isiZulu        73.0   #1
