Cross-Lingual Transfer
289 papers with code • 1 benchmark • 16 datasets
Cross-lingual transfer refers to transfer learning that uses data and models from a language with ample resources (e.g., English) to solve tasks in another, typically lower-resource, language.
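A common instance is zero-shot transfer: train a task model on labeled English data on top of a shared multilingual representation, then apply it directly to another language. The toy sketch below illustrates the idea with hand-built word vectors standing in for a real multilingual encoder (such as mBERT or XLM-R); the vectors, sentences, and the nearest-centroid classifier are all illustrative assumptions, not any paper's method.

```python
# Toy zero-shot cross-lingual transfer: a classifier trained only on English
# is evaluated on Spanish, relying on a shared embedding space in which
# translation pairs land near the same point. The tiny hand-built vectors
# below are stand-ins for a real multilingual encoder, not model outputs.

# Translation pairs map to (approximately) the same point in the shared space.
EMB = {
    "good": (1.0, 0.1),  "great": (0.9, 0.2),
    "bad": (-1.0, 0.1),  "awful": (-0.9, 0.0),
    "bueno": (0.95, 0.12), "genial": (0.88, 0.18),
    "malo": (-0.97, 0.08), "horrible": (-0.92, 0.02),
}

def embed(sentence):
    """Mean-pool the vectors of known words into a sentence vector."""
    vecs = [EMB[w] for w in sentence.split() if w in EMB]
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

def train_centroids(examples):
    """Nearest-centroid classifier: average sentence vectors per label."""
    by_label = {}
    for text, label in examples:
        by_label.setdefault(label, []).append(embed(text))
    return {lab: tuple(sum(c) / len(vs) for c in zip(*vs))
            for lab, vs in by_label.items()}

def predict(centroids, sentence):
    """Assign the label whose centroid is closest in squared distance."""
    v = embed(sentence)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(v, centroids[lab])))

# Train only on English labels...
centroids = train_centroids([("good great", "pos"), ("bad awful", "neg")])

# ...then evaluate zero-shot on Spanish: the shared space carries the task over.
spanish_test = [("bueno genial", "pos"), ("malo horrible", "neg")]
accuracy = sum(predict(centroids, s) == y for s, y in spanish_test) / len(spanish_test)
```

In practice the hand-built table is replaced by a pretrained multilingual encoder and the centroid classifier by a fine-tuned task head, but the transfer mechanism (a shared representation space bridging the training and evaluation languages) is the same.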
Latest papers with no code
Neuron Specialization: Leveraging intrinsic task modularity for multilingual machine translation
Training a unified multilingual model promotes knowledge transfer but inevitably introduces negative interference.
Adapting Mental Health Prediction Tasks for Cross-lingual Learning via Meta-Training and In-context Learning with Large Language Model
The results show that our meta-trained model performs significantly better than standard fine-tuning methods, outperforming the fine-tuning baseline in macro F1 score by 18% and 0.8% over XLM-R and mBERT, respectively.
Event Extraction in Basque: Typologically motivated Cross-Lingual Transfer-Learning Analysis
To perform the experiments we introduce EusIE, an event extraction dataset for Basque, which follows the Multilingual Event Extraction dataset (MEE).
MaiNLP at SemEval-2024 Task 1: Analyzing Source Language Selection in Cross-Lingual Textual Relatedness
This paper presents our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness (STR), on Track C: Cross-lingual.
Bailong: Bilingual Transfer Learning based on QLoRA and Zip-tie Embedding
However, the majority of existing open-source LLMs are pre-trained primarily on English data, with only a small proportion of other languages.
A Systematic Analysis of Subwords and Cross-Lingual Transfer in Multilingual Translation
Multilingual modelling can improve machine translation for low-resource languages, partly through shared subword representations.
Can Machine Translation Bridge Multilingual Pretraining and Cross-lingual Transfer Learning?
We furthermore provide evidence, through similarity measures and an investigation of parameters, that this lack of positive influence is due to output separability, which we argue is useful for machine translation but detrimental elsewhere.
Towards Knowledge-Grounded Natural Language Understanding and Generation
This thesis investigates how natural language understanding and generation with transformer models can benefit from grounding the models with knowledge representations and addresses the following key research questions: (i) Can knowledge of entities extend its benefits beyond entity-centric tasks, such as entity linking?
Cross-Lingual Transfer for Natural Language Inference via Multilingual Prompt Translator
To efficiently transfer soft prompts, we propose a novel framework, Multilingual Prompt Translator (MPT), in which a multilingual prompt translator is introduced to properly process the crucial knowledge embedded in a prompt by changing language knowledge while retaining task knowledge.
Few-Shot Cross-Lingual Transfer for Prompting Large Language Models in Low-Resource Languages
We find that results are task- and language-dependent, but that the prompting method performs best on average across all tasks and languages.