English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too

Intermediate-task training---fine-tuning a pretrained model on an intermediate task before fine-tuning again on the target task---often improves model performance substantially on language understanding tasks in monolingual English settings. We investigate whether English intermediate-task training is still helpful on non-English target tasks. Using nine intermediate language-understanding tasks, we evaluate intermediate-task transfer in a zero-shot cross-lingual setting on the XTREME benchmark. We see large improvements from intermediate training on the BUCC and Tatoeba sentence retrieval tasks and moderate improvements on question-answering target tasks. MNLI, SQuAD and HellaSwag achieve the best overall results as intermediate tasks, while multi-task intermediate training offers small additional improvements. Using our best intermediate-task models for each target task, we obtain a 5.4 point improvement over XLM-R Large on the XTREME benchmark, setting the state of the art as of June 2020. We also investigate continuing multilingual MLM during intermediate-task training and using machine-translated intermediate-task data, but neither consistently outperforms simply performing English intermediate-task training.
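The recipe is two sequential fine-tuning stages on the same pretrained multilingual encoder: first on an English intermediate task, then on the English training data of the target task, followed by zero-shot evaluation on the non-English test sets. The sketch below illustrates that flow under stated assumptions: it uses the Hugging Face transformers and datasets APIs rather than the paper's own codebase, picks MNLI as the intermediate task and PAWS-X English as the target task purely for illustration, and its hyperparameters, checkpoint names, and output directories are placeholders, not the paper's settings.

```python
# Minimal sketch of English intermediate-task training followed by English
# target-task fine-tuning, assuming Hugging Face transformers/datasets.
# Dataset choices, hyperparameters, and paths are illustrative only.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

BASE = "xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(BASE)

def encode(dataset, col_a, col_b):
    # Tokenize sentence pairs into a single encoded input sequence.
    return dataset.map(
        lambda b: tokenizer(b[col_a], b[col_b], truncation=True, max_length=128),
        batched=True,
    )

def fine_tune(model, train_set, output_dir):
    # One fine-tuning stage; the same routine is reused for the
    # intermediate task and for the target task.
    args = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=2,
        per_device_train_batch_size=16,
        learning_rate=1e-5,
    )
    Trainer(model=model, args=args, train_dataset=train_set, tokenizer=tokenizer).train()
    model.save_pretrained(output_dir)

# Stage 1: English intermediate task (MNLI, one of the strongest in the paper).
mnli = encode(load_dataset("multi_nli", split="train"), "premise", "hypothesis")
intermediate = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=3)
fine_tune(intermediate, mnli, "xlmr-mnli")

# Stage 2: fine-tune on English target-task data only (here PAWS-X English,
# an XTREME sentence-pair task). The classification head is re-initialized;
# only the intermediate-trained encoder carries over.
pawsx_en = encode(load_dataset("paws-x", "en", split="train"), "sentence1", "sentence2")
target = AutoModelForSequenceClassification.from_pretrained(
    "xlmr-mnli", num_labels=2, ignore_mismatched_sizes=True
)
fine_tune(target, pawsx_en, "xlmr-mnli-pawsx")

# Zero-shot cross-lingual transfer then means evaluating this target model
# directly on the non-English test sets, with no target-language training data.
```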

Published at AACL 2020 (Asian Chapter of the ACL).
Benchmark results (task: Zero-Shot Cross-Lingual Transfer, dataset: XTREME, model: X-STILTs):

Metric                         Value   Global Rank
Sentence-pair Classification   83.9    #22
Structured Prediction          69.4    #21
Question Answering             67.2    #21
Sentence Retrieval             76.5    #20
Average                        73.5    #20
