Search Results for author: Christian Federmann

Found 39 papers, 3 papers with code

Machine Translation Human Evaluation: an investigation of evaluation based on Post-Editing and its relation with Direct Assessment

no code implementations IWSLT (EMNLP) 2018 Luisa Bentivogli, Mauro Cettolo, Marcello Federico, Christian Federmann

In this paper we present an analysis of the two most prominent methodologies used for the human evaluation of MT quality, namely evaluation based on Post-Editing (PE) and evaluation based on Direct Assessment (DA).

Machine Translation

Microsoft Speech Language Translation (MSLT) Corpus: The IWSLT 2016 release for English, French and German

no code implementations IWSLT 2016 Christian Federmann, William D. Lewis

We describe the Microsoft Speech Language Translation (MSLT) corpus, which was created in order to evaluate end-to-end conversational speech translation quality.

Machine Translation speech-recognition +2

Findings of the 2021 Conference on Machine Translation (WMT21)

no code implementations WMT (EMNLP) 2021 Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondřej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R. Costa-jussà, Cristina España-Bonet, Angela Fan, Christian Federmann, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Philipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Auguste Tapo, Marco Turchi, Valentin Vydrin, Marcos Zampieri

This paper presents the results of the news translation task, the multilingual low-resource translation task for Indo-European languages, the triangular translation task, and the automatic post-editing task organised as part of the Conference on Machine Translation (WMT) 2021. In the news task, participants were asked to build machine translation systems for any of 10 language pairs, to be evaluated on test sets consisting mainly of news stories.

Machine Translation Translation

Domain Adaptation of Document-Level NMT in IWSLT19

no code implementations EMNLP (IWSLT) 2019 Martin Popel, Christian Federmann

We describe our four NMT systems submitted to the IWSLT19 shared task in English→Czech text-to-text translation of TED talks.

Domain Adaptation NMT +1

Findings of the IWSLT 2022 Evaluation Campaign

no code implementations IWSLT (ACL) 2022 Antonios Anastasopoulos, Loïc Barrault, Luisa Bentivogli, Marcely Zanon Boito, Ondřej Bojar, Roldano Cattoni, Anna Currey, Georgiana Dinu, Kevin Duh, Maha Elbayad, Clara Emmanuel, Yannick Estève, Marcello Federico, Christian Federmann, Souhir Gahbiche, Hongyu Gong, Roman Grundkiewicz, Barry Haddow, Benjamin Hsu, Dávid Javorský, Věra Kloudová, Surafel Lakew, Xutai Ma, Prashant Mathur, Paul McNamee, Kenton Murray, Maria Nădejde, Satoshi Nakamura, Matteo Negri, Jan Niehues, Xing Niu, John Ortega, Juan Pino, Elizabeth Salesky, Jiatong Shi, Matthias Sperber, Sebastian Stüker, Katsuhito Sudoh, Marco Turchi, Yogesh Virkar, Alexander Waibel, Changhan Wang, Shinji Watanabe

The evaluation campaign of the 19th International Conference on Spoken Language Translation featured eight shared tasks: (i) Simultaneous speech translation, (ii) Offline speech translation, (iii) Speech to speech translation, (iv) Low-resource speech translation, (v) Multilingual speech translation, (vi) Dialect speech translation, (vii) Formality control for speech translation, (viii) Isometric speech translation.

Speech-to-Speech Translation Translation

Large Language Models Are State-of-the-Art Evaluators of Translation Quality

3 code implementations 28 Feb 2023 Tom Kocmi, Christian Federmann

We describe GEMBA, a GPT-based metric for assessment of translation quality, which works both with a reference translation and without.


Searching for a higher power in the human evaluation of MT

no code implementations 20 Oct 2022 Johnny Tian-Zheng Wei, Tom Kocmi, Christian Federmann

In MT evaluation, pairwise comparisons are conducted to identify the better system.

On User Interfaces for Large-Scale Document-Level Human Evaluation of Machine Translation Outputs

no code implementations EACL (HumEval) 2021 Roman Grundkiewicz, Marcin Junczys-Dowmunt, Christian Federmann, Tom Kocmi

Recent studies emphasize the need for document context in the human evaluation of machine translations, but little research has been done on the impact of user interfaces on annotator productivity and the reliability of assessments.

Machine Translation Translation


Findings of the IWSLT 2020 Evaluation Campaign

no code implementations WS 2020 Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ondřej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Xing Shi, Sebastian Stüker, Marco Turchi, Alexander Waibel, Changhan Wang

The evaluation campaign of the International Conference on Spoken Language Translation (IWSLT 2020) featured this year six challenge tracks: (i) Simultaneous speech translation, (ii) Video speech translation, (iii) Offline speech translation, (iv) Conversational speech translation, (v) Open domain translation, and (vi) Non-native speech translation.


Multilingual Whispers: Generating Paraphrases with Translation

no code implementations WS 2019 Christian Federmann, Oussama Elachqar, Chris Quirk

Naturally occurring paraphrase data, such as multiple news stories about the same event, is a useful but rare resource.

Machine Translation Translation

Findings of the WMT 2019 Shared Tasks on Quality Estimation

no code implementations WS 2019 Erick Fonseca, Lisa Yankovskaya, André F. T. Martins, Mark Fishel, Christian Federmann

We report the results of the WMT19 shared task on Quality Estimation, i.e., the task of predicting the quality of the output of machine translation systems given just the source text and the hypothesis translations.

Machine Translation Translation

Appraise Evaluation Framework for Machine Translation

no code implementations COLING 2018 Christian Federmann

We present Appraise, an open-source framework for crowd-based annotation tasks, notably for evaluation of machine translation output.

Machine Translation Translation

A Richly Annotated, Multilingual Parallel Corpus for Hybrid Machine Translation

no code implementations LREC 2012 Eleftherios Avramidis, Marta R. Costa-jussà, Christian Federmann, Josef van Genabith, Maite Melero, Pavel Pecina

This corpus aims to serve as a basic resource for further research on whether hybrid machine translation algorithms and system combination techniques can benefit from additional (linguistically motivated, decoding, and runtime) information provided by the different systems involved.

Machine Translation Translation

The ML4HMT Workshop on Optimising the Division of Labour in Hybrid Machine Translation

no code implementations LREC 2012 Christian Federmann, Eleftherios Avramidis, Marta R. Costa-jussà, Josef van Genabith, Maite Melero, Pavel Pecina

We describe the “Shared Task on Applying Machine Learning Techniques to Optimise the Division of Labour in Hybrid Machine Translation” (ML4HMT) which aims to foster research on improved system combination approaches for machine translation (MT).

Language Modelling Machine Translation +1

META-SHARE v2: An Open Network of Repositories for Language Resources including Data and Tools

no code implementations LREC 2012 Christian Federmann, Ioanna Giannopoulou, Christian Girardi, Olivier Hamon, Dimitris Mavroeidis, Salvatore Minutoli, Marc Schröder

We explain the underlying motivation for such a distributed repository for metadata storage and give a detailed overview on the META-SHARE application and its various components.
