Search Results for author: Alexandra Chronopoulou

Found 16 papers, 12 papers with code

Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization

no code implementations · 15 Nov 2023 · Alexandra Chronopoulou, Jonas Pfeiffer, Joshua Maynez, Xinyi Wang, Sebastian Ruder, Priyanka Agrawal

Parameter-efficient fine-tuning (PEFT) using labeled task data can significantly improve the performance of large language models (LLMs) on the downstream task.

Text Generation · Zero-Shot Cross-Lingual Transfer
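The "language and task arithmetic" in the title refers to combining separately trained parameter-efficient modules by element-wise operations on their weights. A minimal sketch of that idea, assuming the language and task modules are stored as plain state dicts with matching keys (hypothetical names and weighting, not the authors' released code):

```python
# Minimal sketch of "task arithmetic" over parameter-efficient modules.
# Assumes two PEFT modules (e.g., adapters/LoRA) with identical key sets:
# one trained on the task in a source language, one on unlabeled text in
# the target language. Names and weighting are illustrative.
import torch

def combine_peft(task_params, lang_params, alpha=1.0, beta=1.0):
    """Element-wise combination: alpha * task + beta * language."""
    assert task_params.keys() == lang_params.keys()
    return {k: alpha * task_params[k] + beta * lang_params[k]
            for k in task_params}

# Toy example with random tensors standing in for adapter weights.
task_peft = {"layer0.lora_A": torch.randn(8, 64), "layer0.lora_B": torch.randn(64, 8)}
lang_peft = {k: torch.randn_like(v) for k, v in task_peft.items()}

zero_shot_peft = combine_peft(task_peft, lang_peft)  # load into the frozen LLM for zero-shot use
print({k: v.shape for k, v in zero_shot_peft.items()})
```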

On the Copying Problem of Unsupervised NMT: A Training Schedule with a Language Discriminator Loss

1 code implementation · 26 May 2023 · Yihong Liu, Alexandra Chronopoulou, Hinrich Schütze, Alexander Fraser

By conducting extensive experiments on different language pairs, including similar and distant, high- and low-resource languages, we find that our method alleviates the copying problem, thus improving translation performance on low-resource languages.

Machine Translation · NMT +2
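The language discriminator loss named in the title suggests an auxiliary classifier over encoder representations whose signal discourages copy-prone, language-ambiguous states. A generic sketch of such an auxiliary term, assuming a simple mean-pooled classifier; the paper's actual architecture and training schedule may differ:

```python
# Generic sketch: an auxiliary language discriminator over mean-pooled
# encoder states, added to the translation loss with weight lam.
# The encoder outputs and masks below are stand-ins.
import torch
import torch.nn as nn

class LanguageDiscriminator(nn.Module):
    def __init__(self, hidden_size, num_languages):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_languages)

    def forward(self, encoder_states, mask):
        # Mean-pool over non-padding positions, then classify the language.
        mask = mask.unsqueeze(-1).float()
        pooled = (encoder_states * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        return self.classifier(pooled)

hidden, num_langs, lam = 512, 2, 0.1
disc = LanguageDiscriminator(hidden, num_langs)
states = torch.randn(4, 20, hidden)          # stand-in encoder outputs
mask = torch.ones(4, 20, dtype=torch.bool)   # stand-in padding mask
lang_ids = torch.tensor([0, 0, 1, 1])        # gold language of each source sentence

translation_loss = torch.tensor(2.3)         # stand-in NMT cross-entropy
disc_loss = nn.functional.cross_entropy(disc(states, mask), lang_ids)
total_loss = translation_loss + lam * disc_loss
```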

Improving Isochronous Machine Translation with Target Factors and Auxiliary Counters

no code implementations · 22 May 2023 · Proyag Pal, Brian Thompson, Yogesh Virkar, Prashant Mathur, Alexandra Chronopoulou, Marcello Federico

To translate speech for automatic dubbing, machine translation needs to be isochronous, i.e., translated speech needs to be aligned with the source in terms of speech durations.

Machine Translation · Translation
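The target factors and auxiliary counters of the title presumably attach duration information to each target token so the decoder can track how much speech time remains. A purely illustrative, simplified encoding (the paper defines its own factor set):

```python
# Illustrative only: pair each target token with a crude "remaining duration"
# counter so a factored decoder could condition on it. Token durations and
# the factor vocabulary here are made up.
def factored_target(tokens, durations):
    """tokens: list[str]; durations: per-token duration in arbitrary units."""
    remaining = sum(durations)
    factored = []
    for tok, dur in zip(tokens, durations):
        factored.append((tok, f"REM_{remaining}"))  # token + counter factor
        remaining -= dur
    return factored

print(factored_target(["Guten", "Morgen", "allerseits"], [3, 4, 6]))
# [('Guten', 'REM_13'), ('Morgen', 'REM_10'), ('allerseits', 'REM_6')]
```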

Mitigating Data Imbalance and Representation Degeneration in Multilingual Machine Translation

1 code implementation · 22 May 2023 · Wen Lai, Alexandra Chronopoulou, Alexander Fraser

Despite advances in multilingual neural machine translation (MNMT), we argue that there are still two major challenges in this area: data imbalance and representation degeneration.

Contrastive Learning · Machine Translation +1
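The Contrastive Learning tag suggests an objective that pulls paired sentence representations together across languages. Below is a generic InfoNCE-style loss over sentence embeddings, shown only to illustrate that family of objectives, not the paper's specific formulation:

```python
# Generic InfoNCE-style contrastive loss over paired sentence embeddings.
# Positive pairs sit on the diagonal of the similarity matrix; everything
# else in the batch acts as a negative.
import torch
import torch.nn.functional as F

def contrastive_loss(src_emb, tgt_emb, temperature=0.07):
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature          # (batch, batch) similarities
    labels = torch.arange(src.size(0))            # i-th source matches i-th target
    return F.cross_entropy(logits, labels)

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```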

AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models

no code implementations · 14 Feb 2023 · Alexandra Chronopoulou, Matthew E. Peters, Alexander Fraser, Jesse Dodge

We also explore weight averaging of adapters trained on the same domain with different hyper-parameters, and show that it preserves the performance of a PLM on new domains while obtaining strong in-domain results.

Clustering · Language Modelling +3
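The core operation behind AdapterSoup is a weight-space average of adapters. A minimal sketch of averaging adapter state dicts; how adapters are selected for the soup at test time (e.g., by domain clustering or text similarity) is omitted:

```python
# Minimal sketch of weight-space averaging of adapters ("adapter soup").
# Each adapter is represented by a state dict with identical keys.
import torch

def average_adapters(adapter_state_dicts, weights=None):
    n = len(adapter_state_dicts)
    weights = weights or [1.0 / n] * n
    keys = adapter_state_dicts[0].keys()
    return {k: sum(w * sd[k] for w, sd in zip(weights, adapter_state_dicts))
            for k in keys}

# Toy example: three domain adapters with random tensors of the same shape.
adapters = [{"down.weight": torch.randn(64, 768), "up.weight": torch.randn(768, 64)}
            for _ in range(3)]
soup = average_adapters(adapters)  # load into the frozen PLM at test time
```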

$m^4Adapter$: Multilingual Multi-Domain Adaptation for Machine Translation with a Meta-Adapter

1 code implementation · 21 Oct 2022 · Wen Lai, Alexandra Chronopoulou, Alexander Fraser

We consider a very challenging scenario: adapting the MNMT model both to a new domain and to a new language pair at the same time.

Domain Adaptation · Machine Translation +2
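The meta-adapter in the title points to meta-learning adapter parameters so they adapt quickly to unseen (domain, language-pair) combinations. A Reptile-style meta-update is one common way to realize this; the sketch below shows that generic scheme with a toy inner loop, not necessarily the paper's exact algorithm:

```python
# Reptile-style meta-update over adapter parameters, as a generic stand-in
# for meta-learning an adapter that adapts quickly to new (domain, language)
# pairs. The inner training loop here is a toy placeholder.
import copy
import torch

def reptile_step(meta_params, tasks, inner_train, meta_lr=0.1):
    """meta_params: dict of tensors; tasks: iterable of task batches."""
    new_params = {k: v.clone() for k, v in meta_params.items()}
    for task in tasks:
        adapted = inner_train(copy.deepcopy(meta_params), task)  # few inner SGD steps
        for k in new_params:
            new_params[k] += meta_lr * (adapted[k] - meta_params[k]) / len(tasks)
    return new_params

# Toy inner loop: pretend adaptation just perturbs the parameters.
def fake_inner_train(params, task):
    return {k: v + 0.01 * torch.randn_like(v) for k, v in params.items()}

meta = {"adapter.down": torch.zeros(8, 32), "adapter.up": torch.zeros(32, 8)}
meta = reptile_step(meta, tasks=range(4), inner_train=fake_inner_train)
```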

Language-Family Adapters for Low-Resource Multilingual Neural Machine Translation

no code implementations · 30 Sep 2022 · Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser

Training a new adapter on each language pair or training a single adapter on all language pairs without updating the pretrained model has been proposed as a parameter-efficient alternative.

Cross-Lingual Transfer · Machine Translation +1
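The adapter-based alternatives described above insert small bottleneck layers into a frozen pretrained model; this paper trains one adapter per language family rather than per language pair. A minimal bottleneck adapter with an illustrative per-family lookup:

```python
# Minimal bottleneck adapter (down-project, nonlinearity, up-project, residual)
# of the kind inserted into a frozen pretrained transformer. One adapter
# instance per language family, selected by a simple lookup; the grouping
# into families here is illustrative.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_size=512, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual connection

family_adapters = nn.ModuleDict({
    "romance": BottleneckAdapter(),
    "slavic": BottleneckAdapter(),
})

hidden_states = torch.randn(4, 20, 512)           # frozen-model layer output
out = family_adapters["slavic"](hidden_states)    # route by the sentence's language family
```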

Efficient Hierarchical Domain Adaptation for Pretrained Language Models

1 code implementation · NAACL 2022 · Alexandra Chronopoulou, Matthew E. Peters, Jesse Dodge

The remarkable success of large language models has been driven by dense models trained on massive unlabeled, unstructured corpora.

Domain Adaptation · Language Modelling

Improving the Lexical Ability of Pretrained Language Models for Unsupervised Neural Machine Translation

1 code implementation · NAACL 2021 · Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser

Successful methods for unsupervised neural machine translation (UNMT) employ crosslingual pretraining via self-supervision, often in the form of a masked language modeling or a sequence generation task, which requires the model to align the lexical- and high-level representations of the two languages.

Bilingual Lexicon Induction · Language Modelling +2
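Masked language modeling, mentioned above as the usual crosslingual pretraining objective, corrupts a fraction of input tokens and trains the model to recover them. The snippet shows only the standard BERT/XLM-style masking step, nothing specific to this paper:

```python
# Standard masked-language-modeling corruption step: mask ~15% of tokens and
# keep the originals as labels (unmasked positions set to -100 so they are
# ignored by the loss). Simplified: no 80/10/10 replacement split.
import torch

def mlm_mask(input_ids, mask_token_id, mask_prob=0.15, ignore_index=-100):
    labels = input_ids.clone()
    mask = torch.rand(input_ids.shape) < mask_prob
    labels[~mask] = ignore_index          # loss only on masked positions
    corrupted = input_ids.clone()
    corrupted[mask] = mask_token_id       # replace masked tokens with [MASK]
    return corrupted, labels

ids = torch.randint(5, 1000, (2, 12))
corrupted, labels = mlm_mask(ids, mask_token_id=4)
```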

The LMU Munich System for the WMT 2020 Unsupervised Machine Translation Shared Task

1 code implementation · WMT (EMNLP) 2020 · Alexandra Chronopoulou, Dario Stojanovski, Viktor Hangya, Alexander Fraser

Our core unsupervised neural machine translation (UNMT) system follows the strategy of Chronopoulou et al. (2020), using a monolingual pretrained language generation model (on German) and fine-tuning it on both German and Upper Sorbian, before initializing a UNMT model, which is trained with online backtranslation.

Text Generation · Translation +1
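Online back-translation, the training signal mentioned above, translates monolingual text with the current model and trains the reverse direction to reconstruct the original. A schematic loop with toy stand-ins for the two translation directions; in practice both directions usually share one model:

```python
# Schematic online back-translation step for UNMT: monolingual Upper Sorbian
# text is translated into German by the current model, and the German->Upper
# Sorbian direction is trained on the resulting (synthetic, original) pairs.
# The tiny stand-in "models" only illustrate the data flow.
class ToyTranslator:
    def generate(self, sentences):
        # Stand-in for beam search / sampling with the current model.
        return [f"<synthetic translation of: {s}>" for s in sentences]

    def train_step(self, src, tgt):
        # Stand-in for a gradient step on (synthetic source, original target) pairs.
        return 0.0  # pretend loss

model_de2hsb, model_hsb2de = ToyTranslator(), ToyTranslator()
hsb_monolingual = ["an Upper Sorbian sentence", "another Upper Sorbian sentence"]

synthetic_de = model_hsb2de.generate(hsb_monolingual)               # back-translate
loss = model_de2hsb.train_step(src=synthetic_de, tgt=hsb_monolingual)
# The symmetric step with German monolingual data alternates with this one.
```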

Reusing a Pretrained Language Model on Languages with Limited Corpora for Unsupervised NMT

1 code implementation · EMNLP 2020 · Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser

Using a language model (LM) pretrained on two languages with large monolingual data in order to initialize an unsupervised neural machine translation (UNMT) system yields state-of-the-art results.

Language Modelling · Machine Translation +2
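Reusing a pretrained LM for a language it was not trained on typically requires making room for that language's subwords. One common way is to extend the embedding matrix with newly initialized rows while keeping the pretrained ones; a minimal sketch of that operation, not necessarily this paper's exact procedure:

```python
# Minimal sketch: extend a pretrained embedding matrix with rows for new
# subword vocabulary items so a low-resource language can be added, keeping
# the pretrained rows intact. Shapes and initialization are illustrative.
import torch
import torch.nn as nn

def extend_embeddings(old_embedding: nn.Embedding, num_new_tokens: int) -> nn.Embedding:
    old_vocab, dim = old_embedding.weight.shape
    new_embedding = nn.Embedding(old_vocab + num_new_tokens, dim)
    with torch.no_grad():
        new_embedding.weight[:old_vocab] = old_embedding.weight       # keep pretrained rows
        new_embedding.weight[old_vocab:].normal_(mean=0.0, std=0.02)  # init new rows
    return new_embedding

pretrained = nn.Embedding(32000, 768)      # stand-in for the pretrained LM's embeddings
extended = extend_embeddings(pretrained, num_new_tokens=8000)
print(extended.weight.shape)               # torch.Size([40000, 768])
```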
