Search Results for author: Alexander Fraser

Found 56 papers, 18 papers with code

Why don’t people use character-level machine translation?

no code implementations • Findings (ACL) 2022 • Jindřich Libovický, Helmut Schmid, Alexander Fraser

We present a literature and empirical survey that critically assesses the state of the art in character-level modeling for machine translation (MT).

Machine Translation Translation

Unsupervised Parallel Sentence Extraction from Comparable Corpora

no code implementations • IWSLT (EMNLP) 2018 • Viktor Hangya, Fabienne Braune, Yuliya Kalasouskaya, Alexander Fraser

We show that our approach is effective on three language pairs without the use of any bilingual signal, which is important because parallel sentence mining is most useful in low-resource scenarios.

Sentence Word Embeddings

Adapting Entities across Languages and Cultures

no code implementations • Findings (EMNLP) 2021 • Denis Peskov, Viktor Hangya, Jordan Boyd-Graber, Alexander Fraser

He is associated with founding a company in the United States, so perhaps the German founder Carl Benz could stand in for Gates in those contexts.

Machine Translation Question Answering +1

Do not neglect related languages: The case of low-resource Occitan cross-lingual word embeddings

no code implementations • EMNLP (MRL) 2021 • Lisa Woller, Viktor Hangya, Alexander Fraser

In contrast to previous approaches which leverage independently pre-trained embeddings of languages, we (i) train CLWEs for the low-resource and a related language jointly and (ii) map them to the target language to build the final multilingual space.

Bilingual Lexicon Induction Cross-Lingual Word Embeddings +1

The LMU Munich System for the WMT 2021 Large-Scale Multilingual Machine Translation Shared Task

no code implementations • WMT (EMNLP) 2021 • Wen Lai, Jindřich Libovický, Alexander Fraser

This paper describes the submission of LMU Munich to the WMT 2021 multilingual machine translation task for small track #1, which studies translation between 6 languages (Croatian, Hungarian, Estonian, Serbian, Macedonian, English) in 30 directions.

Data Augmentation Knowledge Distillation +2

Improving Machine Translation of Rare and Unseen Word Senses

no code implementations • WMT (EMNLP) 2021 • Viktor Hangya, Qianchu Liu, Dario Stojanovski, Alexander Fraser, Anna Korhonen

The performance of NMT systems has improved drastically in the past few years, but the translation of multi-sense words still poses a challenge.

Bilingual Lexicon Induction NMT +3

The LMU Munich Systems for the WMT21 Unsupervised and Very Low-Resource Translation Task

no code implementations • WMT (EMNLP) 2021 • Jindřich Libovický, Alexander Fraser

We present our submissions to the WMT21 shared task in Unsupervised and Very Low Resource machine translation between German and Upper Sorbian, German and Lower Sorbian, and Russian and Chuvash.

Machine Translation Translation

Don’t Forget Cheap Training Signals Before Building Unsupervised Bilingual Word Embeddings

no code implementations • LREC (BUCC) 2022 • Silvia Severini, Viktor Hangya, Masoud Jalili Sabet, Alexander Fraser, Hinrich Schütze

The two approaches we find most effective are: 1) using identical words as seed lexicons (which unsupervised approaches incorrectly assume are not available for orthographically distinct language pairs) and 2) combining such lexicons with pairs extracted by matching romanized versions of words with an edit distance threshold.

Cross-Lingual Transfer Word Embeddings

Cross-Lingual Transfer Learning for Hate Speech Detection

no code implementations • EACL (LTEDI) 2021 • Irina Bigoulaeva, Viktor Hangya, Alexander Fraser

Rather than collecting and annotating new hate speech data, we show how to use cross-lingual transfer learning to leverage already existing data from higher-resource languages.

Cross-Lingual Transfer Hate Speech Detection +2

Are BabyLMs Second Language Learners?

no code implementations • 28 Oct 2024 • Lukas Edman, Lisa Bylinina, Faeze Ghorbanpour, Alexander Fraser

This paper describes a linguistically-motivated approach to the 2024 edition of the BabyLM Challenge (Warstadt et al. 2023).

Sentence

Hate Personified: Investigating the role of LLMs in content moderation

1 code implementation • 3 Oct 2024 • Sarah Masud, Sahajpreet Singh, Viktor Hangya, Alexander Fraser, Tanmoy Chakraborty

For subjective tasks such as hate detection, where people perceive hate differently, the Large Language Model's (LLM) ability to represent diverse groups is unclear.

Style-Specific Neurons for Steering LLMs in Text Style Transfer

1 code implementation • 1 Oct 2024 • Wen Lai, Viktor Hangya, Alexander Fraser

Text style transfer (TST) aims to modify the style of a text without altering its original meaning.

Diversity Style Transfer +1

CUTE: Measuring LLMs' Understanding of Their Tokens

1 code implementation • 23 Sep 2024 • Lukas Edman, Helmut Schmid, Alexander Fraser

Large Language Models (LLMs) show remarkable performance on a wide variety of tasks.

LLMs Beyond English: Scaling the Multilingual Capability of LLMs with Cross-Lingual Feedback

no code implementations • 3 Jun 2024 • Wen Lai, Mohsen Mesgar, Alexander Fraser

We perform multilingual instruction tuning on the constructed instruction data and further align the LLMs with human feedback using the DPO algorithm on our cross-lingual human feedback dataset.

Joint Lemmatization and Morphological Tagging with LEMMING

no code implementations • EMNLP 2015 • Thomas Müller, Ryan Cotterell, Alexander Fraser, Hinrich Schütze

We present LEMMING, a modular log-linear model that jointly models lemmatization and tagging and supports the integration of arbitrary global features.

Lemmatization Morphological Tagging

Labeled Morphological Segmentation with Semi-Markov Models

no code implementations • CoNLL 2015 • Ryan Cotterell, Thomas Müller, Alexander Fraser, Hinrich Schütze

We present labeled morphological segmentation, an alternative view of morphological processing that unifies several tasks.

Segmentation TAG

Understanding Cross-Lingual Alignment -- A Survey

no code implementations • 9 Apr 2024 • Katharina Hämmerl, Jindřich Libovický, Alexander Fraser

Cross-lingual alignment, the meaningful similarity of representations across languages in multilingual language models, has been an active field of research in recent years.

Decoder Survey

Multilingual Word Embeddings for Low-Resource Languages using Anchors and a Chain of Related Languages

no code implementations • 21 Nov 2023 • Viktor Hangya, Silvia Severini, Radoslav Ralev, Alexander Fraser, Hinrich Schütze

In this paper, we propose to build multilingual word embeddings (MWEs) via a novel language chain-based approach that incorporates intermediate related languages to bridge the gap between the distant source and target.

Bilingual Lexicon Induction Multilingual NLP +1

Extending Multilingual Machine Translation through Imitation Learning

no code implementations • 14 Nov 2023 • Wen Lai, Viktor Hangya, Alexander Fraser

Despite the growing variety of languages supported by existing multilingual neural machine translation (MNMT) models, most of the world's languages are still being left behind.

Imitation Learning Machine Translation +1

On the Copying Problem of Unsupervised NMT: A Training Schedule with a Language Discriminator Loss

1 code implementation • 26 May 2023 • Yihong Liu, Alexandra Chronopoulou, Hinrich Schütze, Alexander Fraser

By conducting extensive experiments on different language pairs, including similar and distant, high- and low-resource languages, we find that our method alleviates the copying problem, thus improving the translation performance on low-resource languages.

Machine Translation NMT +2

How to Solve Few-Shot Abusive Content Detection Using the Data We Actually Have

no code implementations • 23 May 2023 • Viktor Hangya, Alexander Fraser

Our experiments show that, using already existing datasets and only a few shots of the target task, the performance of models improves both monolingually and across languages.

Abusive Language

Mitigating Data Imbalance and Representation Degeneration in Multilingual Machine Translation

1 code implementation • 22 May 2023 • Wen Lai, Alexandra Chronopoulou, Alexander Fraser

Despite advances in multilingual neural machine translation (MNMT), we argue that there are still two major challenges in this area: data imbalance and representation degeneration.

Contrastive Learning Machine Translation +1

AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models

no code implementations • 14 Feb 2023 • Alexandra Chronopoulou, Matthew E. Peters, Alexander Fraser, Jesse Dodge

We also explore weight averaging of adapters trained on the same domain with different hyper-parameters, and show that it preserves the performance of a PLM on new domains while obtaining strong in-domain results.

Clustering Language Modelling +3

$m^4Adapter$: Multilingual Multi-Domain Adaptation for Machine Translation with a Meta-Adapter

1 code implementation • 21 Oct 2022 • Wen Lai, Alexandra Chronopoulou, Alexander Fraser

We consider a very challenging scenario: adapting the MNMT model both to a new domain and to a new language pair at the same time.

Domain Adaptation Machine Translation +2

A Survey of Methods for Addressing Class Imbalance in Deep-Learning Based Natural Language Processing

no code implementations • 10 Oct 2022 • Sophie Henning, William Beluch, Alexander Fraser, Annemarie Friedrich

With this survey, the first overview of class imbalance in deep-learning-based NLP, we provide guidance for NLP researchers and practitioners dealing with imbalanced data.

Benchmarking Data Augmentation +1

Language-Family Adapters for Low-Resource Multilingual Neural Machine Translation

no code implementations • 30 Sep 2022 • Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser

Training a new adapter on each language pair or training a single adapter on all language pairs without updating the pretrained model has been proposed as a parameter-efficient alternative.

Cross-Lingual Transfer Machine Translation +1

Don't Forget Cheap Training Signals Before Building Unsupervised Bilingual Word Embeddings

no code implementations • 31 May 2022 • Silvia Severini, Viktor Hangya, Masoud Jalili Sabet, Alexander Fraser, Hinrich Schütze

The two approaches we find most effective are: 1) using identical words as seed lexicons (which unsupervised approaches incorrectly assume are not available for orthographically distinct language pairs) and 2) combining such lexicons with pairs extracted by matching romanized versions of words with an edit distance threshold.

Cross-Lingual Transfer Word Embeddings

Demonstrating CAT: Synthesizing Data-Aware Conversational Agents for Transactional Databases

no code implementations • 26 Mar 2022 • Marius Gassen, Benjamin Hättasch, Benjamin Hilprecht, Nadja Geisler, Alexander Fraser, Carsten Binnig

However, developing a conversational agent (i.e., a chatbot-like interface) to allow end-users to interact with an application using natural language requires both immense amounts of training data and NLP expertise.

Chatbot

Modeling Target-Side Morphology in Neural Machine Translation: A Comparison of Strategies

no code implementations • 25 Mar 2022 • Marion Weller-Di Marco, Matthias Huck, Alexander Fraser

Key challenges of rich target-side morphology in data-driven machine translation include: (1) A large number of differently inflected word surface forms entails a larger vocabulary and thus data sparsity.

Decoder LEMMA +4

Do Multilingual Language Models Capture Differing Moral Norms?

no code implementations • 18 Mar 2022 • Katharina Hämmerl, Björn Deiseroth, Patrick Schramowski, Jindřich Libovický, Alexander Fraser, Kristian Kersting

Massively multilingual sentence representations are trained on large corpora of uncurated data, with a very imbalanced proportion of languages included in the training.

Sentence XLM-R

Improving Both Domain Robustness and Domain Adaptability in Machine Translation

1 code implementation • COLING 2022 • Wen Lai, Jindřich Libovický, Alexander Fraser

First, we want to reach domain robustness, i.e., we want to reach high quality on both domains seen in the training data and unseen domains.

Domain Adaptation Machine Translation +3

Why don't people use character-level machine translation?

no code implementations • 15 Oct 2021 • Jindřich Libovický, Helmut Schmid, Alexander Fraser

We present a literature and empirical survey that critically assesses the state of the art in character-level modeling for machine translation (MT).

Machine Translation Translation

Neural String Edit Distance

1 code implementation • spnlp (ACL) 2022 • Jindřich Libovický, Alexander Fraser

We propose the neural string edit distance model for string-pair matching and string transduction based on learnable string edit distance.

Classification General Classification +1

Improving the Lexical Ability of Pretrained Language Models for Unsupervised Neural Machine Translation

1 code implementation • NAACL 2021 • Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser

Successful methods for unsupervised neural machine translation (UNMT) employ crosslingual pretraining via self-supervision, often in the form of a masked language modeling or a sequence generation task, which requires the model to align the lexical- and high-level representations of the two languages.

Bilingual Lexicon Induction Language Modelling +2

ContraCAT: Contrastive Coreference Analytical Templates for Machine Translation

no code implementations • COLING 2020 • Dario Stojanovski, Benno Krojer, Denis Peskov, Alexander Fraser

Recent high scores on pronoun translation using context-aware neural machine translation have suggested that current approaches work well.

Machine Translation NMT +1

The LMU Munich System for the WMT 2020 Unsupervised Machine Translation Shared Task

1 code implementation • WMT (EMNLP) 2020 • Alexandra Chronopoulou, Dario Stojanovski, Viktor Hangya, Alexander Fraser

Our core unsupervised neural machine translation (UNMT) system follows the strategy of Chronopoulou et al. (2020), using a monolingual pretrained language generation model (on German) and fine-tuning it on both German and Upper Sorbian, before initializing a UNMT model, which is trained with online backtranslation.

Text Generation Translation +1

Anchor-based Bilingual Word Embeddings for Low-Resource Languages

no code implementations • ACL 2021 • Tobias Eder, Viktor Hangya, Alexander Fraser

For low-resource languages, training MWEs monolingually results in MWEs of poor quality, and thus poor bilingual word embeddings (BWEs) as well.

Bilingual Lexicon Induction Cross-Lingual Transfer +5

Reusing a Pretrained Language Model on Languages with Limited Corpora for Unsupervised NMT

1 code implementation • EMNLP 2020 • Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser

Using a language model (LM) pretrained on two languages with large monolingual data in order to initialize an unsupervised neural machine translation (UNMT) system yields state-of-the-art results.

Language Modelling Machine Translation +2

Towards Reasonably-Sized Character-Level Transformer NMT by Finetuning Subword Systems

2 code implementations • EMNLP 2020 • Jindřich Libovický, Alexander Fraser

Applying the Transformer architecture on the character level usually requires very deep architectures that are difficult and slow to train.

Machine Translation NMT +2

How Language-Neutral is Multilingual BERT?

1 code implementation • 8 Nov 2019 • Jindřich Libovický, Rudolf Rosa, Alexander Fraser

Multilingual BERT (mBERT) provides sentence representations for 104 languages, which are useful for many multi-lingual tasks.

Retrieval Sentence +3
