Search Results for author: Markus Freitag

Found 34 papers, 6 papers with code

Findings of the 2021 Conference on Machine Translation (WMT21)

no code implementations WMT (EMNLP) 2021 Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondřej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R. Costa-Jussa, Cristina España-Bonet, Angela Fan, Christian Federmann, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Philipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Auguste Tapo, Marco Turchi, Valentin Vydrin, Marcos Zampieri

This paper presents the results of the news translation task, the multilingual low-resource translation task for Indo-European languages, the triangular translation task, and the automatic post-editing task organised as part of the Conference on Machine Translation (WMT) 2021. In the news task, participants were asked to build machine translation systems for any of 10 language pairs, to be evaluated on test sets consisting mainly of news stories.

Machine Translation Translation

Results of the WMT20 Metrics Shared Task

no code implementations WMT (EMNLP) 2020 Nitika Mathur, Johnny Wei, Markus Freitag, Qingsong Ma, Ondřej Bojar

Participants were asked to score the outputs of the translation systems competing in the WMT20 News Translation Task with automatic metrics.

Translation

Findings of the WMT 2020 Shared Task on Automatic Post-Editing

no code implementations WMT (EMNLP) 2020 Rajen Chatterjee, Markus Freitag, Matteo Negri, Marco Turchi

Due to i) the different source/domain of data compared to the past (Wikipedia vs Information Technology), ii) the different quality of the initial translations to be corrected and iii) the introduction of a new language pair (English-Chinese), this year’s results are not directly comparable with last year’s round.

Automatic Post-Editing

Minimum Bayes Risk Decoding with Neural Metrics of Translation Quality

no code implementations17 Nov 2021 Markus Freitag, David Grangier, Qijun Tan, Bowen Liang

This work applies Minimum Bayes Risk (MBR) decoding to optimize diverse automated metrics of translation quality.
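The core idea can be sketched in a few lines: instead of picking the single highest-probability hypothesis, MBR decoding scores each candidate against all other candidates (as pseudo-references) under a utility metric and returns the one with the highest expected utility. The `utility` function below is a hypothetical stand-in for a neural metric such as BLEURT; a toy unigram F1 is used only so the sketch runs.

```python
def utility(hyp: str, ref: str) -> float:
    """Toy stand-in for a learned quality metric (e.g. BLEURT)."""
    h, r = set(hyp.split()), set(ref.split())
    if not h or not r:
        return 0.0
    overlap = len(h & r)
    p, rec = overlap / len(h), overlap / len(r)
    return 2 * p * rec / (p + rec) if p + rec else 0.0

def mbr_decode(candidates: list[str]) -> str:
    """Return the candidate with the highest expected utility
    against all other candidates treated as pseudo-references."""
    def expected_utility(hyp: str) -> float:
        return sum(utility(hyp, ref) for ref in candidates if ref is not hyp)
    return max(candidates, key=expected_utility)

samples = [
    "the cat sat on the mat",
    "the cat sat on a mat",
    "a feline rested on the rug",
]
consensus = mbr_decode(samples)
```

In practice the candidate list comes from sampling or beam search over the NMT model, and the utility metric is the expensive part of the computation.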

Machine Translation Translation

Using Machine Translation to Localize Task Oriented NLG Output

no code implementations9 Jul 2021 Scott Roy, Cliff Brunk, Kyu-Young Kim, Justin Zhao, Markus Freitag, Mihir Kale, Gagan Bansal, Sidharth Mudgal, Chris Varano

One of the challenges in a task-oriented natural language application like the Google Assistant, Siri, or Alexa is localizing the output to many languages.

Domain Adaptation Machine Translation +1

What Can Unsupervised Machine Translation Contribute to High-Resource Language Pairs?

no code implementations30 Jun 2021 Kelly Marchisio, Markus Freitag, David Grangier

Whereas existing literature on unsupervised machine translation (MT) focuses on exploiting unsupervised techniques for low-resource language pairs where bilingual training data is scarce or unavailable, we investigate whether unsupervised MT can also improve translation quality of high-resource language pairs where sufficient bitext does exist.

Translation Unsupervised Machine Translation

Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation

3 code implementations29 Apr 2021 Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, Wolfgang Macherey

Human evaluation of modern high-quality machine translation systems is a difficult problem, and there is increasing evidence that inadequate evaluation procedures can lead to erroneous conclusions.

Machine Translation Translation

Assessing Reference-Free Peer Evaluation for Machine Translation

no code implementations NAACL 2021 Sweta Agrawal, George Foster, Markus Freitag, Colin Cherry

Reference-free evaluation has the potential to make machine translation evaluation substantially more scalable, allowing us to pivot easily to new languages or domains.

Machine Translation Translation

Complete Multilingual Neural Machine Translation

no code implementations WMT (EMNLP) 2020 Markus Freitag, Orhan Firat

We reintroduce this direct parallel data from multi-way aligned corpora between all source and target languages.

Machine Translation Transfer Learning +1

Human-Paraphrased References Improve Neural Machine Translation

1 code implementation WMT (EMNLP) 2020 Markus Freitag, George Foster, David Grangier, Colin Cherry

When used in place of original references, the paraphrased versions produce metric scores that correlate better with human judgment.

Machine Translation Translation

KoBE: Knowledge-Based Machine Translation Evaluation

1 code implementation Findings of the Association for Computational Linguistics 2020 Zorik Gekhman, Roee Aharoni, Genady Beryozkin, Markus Freitag, Wolfgang Macherey

Our approach achieves the highest correlation with human judgements on 9 out of the 18 language pairs from the WMT19 benchmark for evaluation without references, which is the largest number of wins for a single evaluation method on this task.

Machine Translation Translation

Translationese as a Language in "Multilingual" NMT

no code implementations ACL 2020 Parker Riley, Isaac Caswell, Markus Freitag, David Grangier

Machine translation has an undesirable propensity to produce "translationese" artifacts, which can lead to higher BLEU scores while being liked less by human raters.

Machine Translation TAG +1

BLEU might be Guilty but References are not Innocent

2 code implementations EMNLP 2020 Markus Freitag, David Grangier, Isaac Caswell

The quality of automatic metrics for machine translation has been increasingly called into question, especially for high-quality systems.

Machine Translation Translation

APE at Scale and its Implications on MT Evaluation Biases

no code implementations WS 2019 Markus Freitag, Isaac Caswell, Scott Roy

In this work, we train an Automatic Post-Editing (APE) model and use it to reveal biases in standard Machine Translation (MT) evaluation procedures.

Automatic Post-Editing Translation

Unsupervised Natural Language Generation with Denoising Autoencoders

1 code implementation EMNLP 2018 Markus Freitag, Scott Roy

Generating text from structured data is important for various tasks such as question answering and dialog systems.
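The denoising-autoencoder training signal behind this line of work can be sketched simply: corrupt a sentence (drop words, locally shuffle the rest) and train a model to reconstruct the original. Only the corruption step is shown below; the reconstruction model itself is out of scope, and the parameter names are illustrative, not from the paper.

```python
import random

def corrupt(tokens, drop_prob=0.3, shuffle_window=2, rng=None):
    """Randomly drop tokens, then locally shuffle the survivors:
    each kept token may move at most `shuffle_window` positions."""
    rng = rng or random.Random(0)
    kept = [t for t in tokens if rng.random() > drop_prob]
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]

sentence = "the hotel has a pool and free wifi".split()
noisy = corrupt(sentence)
# training pair: the model learns to map `noisy` back to `sentence`
```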

Denoising Question Answering +1

Attention-based Vocabulary Selection for NMT Decoding

no code implementations12 Jun 2017 Baskaran Sankaran, Markus Freitag, Yaser Al-Onaizan

Usually, the candidate lists are built by combining the output of an external word-to-word aligner, phrase-table entries, or the most frequent target words.
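A minimal sketch of such per-sentence vocabulary selection, assuming a word-to-word translation lexicon (e.g. from an external aligner): the decoder's softmax is restricted to candidate translations of the source words plus a small set of frequent target words. The lexicon and word lists below are hypothetical.

```python
# Hypothetical aligner output: source word -> candidate target words.
LEXICON = {
    "haus": {"house", "home"},
    "blau": {"blue"},
}
# A short list of always-allowed frequent target words.
TOP_FREQUENT = {"the", "a", "is", "."}

def candidate_vocab(source_tokens):
    """Restrict the decoding vocabulary to lexicon translations of the
    source words plus the most frequent target words."""
    vocab = set(TOP_FREQUENT)
    for tok in source_tokens:
        vocab |= LEXICON.get(tok, set())
    return vocab

allowed = candidate_vocab(["das", "haus", "ist", "blau"])
```

Decoding then only scores words in `allowed`, which shrinks the softmax dramatically for each sentence.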

Machine Translation Translation

Local System Voting Feature for Machine Translation System Combination

no code implementations WS 2015 Markus Freitag, Jan-Thorsten Peter, Stephan Peitz, Minwei Feng, Hermann Ney

In this paper, we enhance the traditional confusion network system combination approach with an additional model trained by a neural network.

Machine Translation Translation

Ensemble Distillation for Neural Machine Translation

no code implementations6 Feb 2017 Markus Freitag, Yaser Al-Onaizan, Baskaran Sankaran

Knowledge distillation describes a method for training a student network to perform better by learning from a stronger teacher network.
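The distillation loss itself is compact: the student is trained toward the teacher's soft output distribution with a KL-divergence term. The sketch below is framework-free plain Python, with toy hand-written distributions standing in for model outputs.

```python
import math

def kl_divergence(teacher_probs, student_probs, eps=1e-9):
    """KL(teacher || student), the usual distillation loss term."""
    return sum(t * math.log((t + eps) / (s + eps))
               for t, s in zip(teacher_probs, student_probs))

teacher = [0.7, 0.2, 0.1]          # soft targets from the stronger model
student_good = [0.65, 0.25, 0.10]  # close to the teacher -> small loss
student_bad = [0.10, 0.10, 0.80]   # far from the teacher -> large loss

assert kl_divergence(teacher, student_good) < kl_divergence(teacher, student_bad)
```

In real training this term is computed per target token over the full vocabulary and typically mixed with the ordinary cross-entropy loss on the reference translation.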

Knowledge Distillation Machine Translation +1

Beam Search Strategies for Neural Machine Translation

1 code implementation WS 2017 Markus Freitag, Yaser Al-Onaizan

In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores.
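One such flexible strategy can be sketched as score-based relative pruning: rather than always keeping a fixed beam of k hypotheses, candidates whose log-probability falls more than a threshold below the current best are dropped, so the effective beam size varies per step. The parameter names below are illustrative, not taken from the paper.

```python
def prune_beam(candidates, max_beam=5, rel_threshold=2.0):
    """Keep at most `max_beam` (score, tokens) hypotheses whose score is
    within `rel_threshold` of the best score at this step."""
    candidates = sorted(candidates, key=lambda c: c[0], reverse=True)
    best = candidates[0][0]
    kept = [c for c in candidates if best - c[0] <= rel_threshold]
    return kept[:max_beam]

step = [(-0.1, ["the"]), (-0.4, ["a"]), (-3.5, ["an"]), (-6.0, ["this"])]
survivors = prune_beam(step)  # only the two competitive hypotheses survive
```

When one hypothesis dominates, the beam collapses to a handful of candidates and decoding speeds up; when scores are close, the full beam is kept.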

Machine Translation Translation

Fast Domain Adaptation for Neural Machine Translation

no code implementations20 Dec 2016 Markus Freitag, Yaser Al-Onaizan

The basic concept in NMT is to train a large Neural Network that maximizes the translation performance on a given parallel corpus.

Domain Adaptation Machine Translation +1
