no code implementations • EAMT 2020 • Maciej Modrzejewski, Miriam Exel, Bianka Buschbeck, Thanh-Le Ha, Alexander Waibel
The correct translation of named entities (NEs) still poses a challenge for conventional neural machine translation (NMT) systems.
no code implementations • EAMT 2020 • Miriam Exel, Bianka Buschbeck, Lauritz Brandt, Simona Doneva
This paper examines approaches to bias a neural machine translation model to adhere to terminology constraints in an industrial setup.
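The snippet only names the goal, so here is a hedged illustration of one widely used way to bias an NMT model toward terminology: annotate source terms inline with the required target translations so the model learns to produce the annotated form. The tag names and term base below are assumptions for illustration, not necessarily the paper's exact scheme.

```python
# A minimal sketch of inline terminology annotation for NMT.
# Tags and the term dictionary are hypothetical, for illustration only.

TERMS = {"purchase order": "Bestellung"}  # hypothetical EN->DE term base

def annotate_source(sentence: str, terms: dict[str, str]) -> str:
    """Wrap each known source term with its target-side constraint."""
    for src_term, tgt_term in terms.items():
        sentence = sentence.replace(
            src_term, f"<term> {src_term} <trans> {tgt_term} </term>"
        )
    return sentence

print(annotate_source("Create a purchase order in the system.", TERMS))
# Create a <term> purchase order <trans> Bestellung </term> in the system.
```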
no code implementations • EAMT 2022 • Bianka Buschbeck, Jennifer Mell, Miriam Exel, Matthias Huck
This paper addresses the automatic translation of conversational content in a business context, such as support chat dialogues.
no code implementations • 12 Feb 2025 • Sai Koneru, Matthias Huck, Miriam Exel, Jan Niehues
An emerging research direction in NMT involves the use of Quality Estimation (QE) models, which have demonstrated high correlations with human judgment and can enhance translations through Quality-Aware Decoding.
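As a hedged sketch of how QE models can drive Quality-Aware Decoding, one common recipe is N-best reranking: generate several hypotheses and keep the one a reference-free QE model scores highest. `generate_candidates` and `qe_score` below are hypothetical placeholders for an NMT system and a QE estimator (e.g. a COMET-style model), not the paper's actual interfaces.

```python
# A minimal sketch of quality-aware decoding via N-best reranking.

def quality_aware_rerank(source: str,
                         generate_candidates,
                         qe_score,
                         n_best: int = 8) -> str:
    # Sample an N-best list from the underlying NMT system.
    candidates = generate_candidates(source, num_return_sequences=n_best)
    # Reference-free QE scores each (source, hypothesis) pair directly,
    # so no gold translation is needed at decoding time.
    scored = [(qe_score(source, hyp), hyp) for hyp in candidates]
    return max(scored, key=lambda pair: pair[0])[1]
```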
no code implementations • 3 Oct 2024 • Nathaniel Berger, Stefan Riezler, Miriam Exel, Matthias Huck
We attempt to use these implicit preferences for preference optimization (PO) and show that it helps the model move towards post-edit-like hypotheses and away from machine-translation-like ones.
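Reading "implicit preferences" as preference pairs, a minimal sketch of the data preparation might look as follows: each human post-edit is treated as the preferred output and the raw MT hypothesis as the dispreferred one, the triple format consumed by DPO-style trainers. Field names are illustrative assumptions.

```python
# A minimal sketch of turning post-edits into preference pairs for PO.

def build_preference_pairs(examples):
    pairs = []
    for ex in examples:
        pairs.append({
            "prompt": ex["source"],        # source sentence
            "chosen": ex["post_edit"],     # human-corrected translation
            "rejected": ex["mt_output"],   # raw machine translation
        })
    return pairs
```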
no code implementations • 21 Aug 2024 • Sai Koneru, Matthias Huck, Miriam Exel, Jan Niehues
However, real-world tasks such as multimodal translation often require a combination of these strengths, for example handling both translation and image processing.
no code implementations • 4 Jun 2024 • Nathaniel Berger, Stefan Riezler, Miriam Exel, Matthias Huck
While large language models (LLMs) pre-trained on massive amounts of unpaired language data have reached the state-of-the-art in machine translation (MT) of general domain texts, post-editing (PE) is still required to correct errors and to enhance term translation quality in specialized domains.
no code implementations • 23 Oct 2023 • Sai Koneru, Miriam Exel, Matthias Huck, Jan Niehues
Building on the LLM's exceptional ability to process and generate lengthy sequences, we also propose extending our approach to document-level translation.
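As a hedged sketch of what document-level post-editing with an LLM could look like, assuming an OpenAI-style chat interface; the instruction wording below is invented for illustration and is not the paper's exact setup.

```python
# A minimal sketch of a document-level post-editing prompt for an LLM.

def build_doc_pe_prompt(src_doc: str, mt_doc: str) -> list[dict]:
    return [
        {"role": "system",
         "content": "You are a translation post-editor. Improve the draft "
                    "translation of the whole document, keeping sentence "
                    "alignment and document-level consistency."},
        {"role": "user",
         "content": f"Source document:\n{src_doc}\n\n"
                    f"Draft translation:\n{mt_doc}"},
    ]
```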
no code implementations • 17 Jul 2023 • Nathaniel Berger, Miriam Exel, Matthias Huck, Stefan Riezler
Supervised learning in Neural Machine Translation (NMT) typically follows a teacher forcing paradigm where reference tokens constitute the conditioning context in the model's prediction, instead of its own previous predictions.
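For readers unfamiliar with the term, below is a minimal PyTorch-style sketch of a teacher-forcing training step: the gold reference (shifted right) supplies the conditioning context for every next-token prediction, so the model never conditions on its own previous outputs during training. The `model` interface is a hypothetical seq2seq assumption.

```python
# A minimal sketch of a teacher-forcing training step for seq2seq NMT.
import torch
import torch.nn.functional as F

def teacher_forcing_step(model, src, ref):
    # ref: gold target token ids, shape (batch, tgt_len)
    decoder_input = ref[:, :-1]           # condition on the reference prefix
    target = ref[:, 1:]                   # predict the next reference token
    logits = model(src, decoder_input)    # (batch, tgt_len - 1, vocab)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), target.reshape(-1)
    )
```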
1 code implementation • AACL (WAT) 2020 • Bianka Buschbeck, Miriam Exel
This paper accompanies the software documentation data set for machine translation, a parallel evaluation data set originating from the SAP Help Portal, which we released to the machine translation community for research purposes.