no code implementations • EMNLP 2021 • Nathaniel Berger, Stefan Riezler, Sebastian Ebert, Artem Sokolov
Recently, more attention has been given to adversarial attacks on neural networks for natural language processing (NLP).
no code implementations • 3 Oct 2024 • Nathaniel Berger, Stefan Riezler, Miriam Exel, Matthias Huck
We attempt to use the implicit preferences that post-edits express over machine-translation hypotheses for preference optimization (PO) and show that this helps the model move towards post-edit-like hypotheses and away from machine-translation-like hypotheses.
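This framing suggests a DPO-style objective in which the post-edit plays the role of the preferred sequence and the original machine-translation hypothesis the dispreferred one. The sketch below illustrates that idea under this assumption; the specific loss form, variable names, and hyperparameters are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def preference_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style preference loss on (post-edit, MT hypothesis) pairs.

    logp_* are summed token log-probabilities under the policy model;
    ref_logp_* are the same quantities under a frozen reference model.
    The post-edit is treated as the preferred ("chosen") sequence and the
    raw MT hypothesis as the dispreferred ("rejected") one.
    """
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy usage with made-up log-probabilities for a single preference pair.
loss = preference_loss(
    logp_chosen=torch.tensor([-12.3]),    # post-edited translation
    logp_rejected=torch.tensor([-15.8]),  # original MT hypothesis
    ref_logp_chosen=torch.tensor([-13.0]),
    ref_logp_rejected=torch.tensor([-14.9]),
)
print(loss.item())
```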
no code implementations • 4 Jun 2024 • Nathaniel Berger, Stefan Riezler, Miriam Exel, Matthias Huck
While large language models (LLMs) pre-trained on massive amounts of unpaired language data have reached the state of the art in machine translation (MT) of general-domain texts, post-editing (PE) is still required to correct errors and to enhance term translation quality in specialized domains.
no code implementations • 17 Jul 2023 • Nathaniel Berger, Miriam Exel, Matthias Huck, Stefan Riezler
Supervised learning in Neural Machine Translation (NMT) typically follows a teacher forcing paradigm where reference tokens constitute the conditioning context in the model's prediction, instead of its own previous predictions.
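To make the teacher-forcing contrast concrete, here is a minimal, self-contained sketch: with teacher forcing, the conditioning context at step t is the gold reference prefix; without it, the model conditions on its own earlier greedy predictions. The toy decoder and token ids are made up for illustration and do not reflect the paper's actual models or data.

```python
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    """Stand-in decoder: embeds the prefix and predicts logits for the next token."""
    def __init__(self, vocab_size=16, dim=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, prefix):
        return self.out(self.embed(prefix).mean(dim=0))  # [vocab_size]

def next_token_contexts(reference, model, teacher_forcing=True):
    """Build the conditioning context used at each decoding step during training.

    Teacher forcing feeds the gold reference token at every step; the
    alternative feeds the model's own greedy prediction from the previous step.
    """
    context, predictions = [reference[0]], []  # start from the BOS token
    for t in range(1, len(reference)):
        logits = model(torch.tensor(context))
        predictions.append(int(logits.argmax()))
        context.append(reference[t] if teacher_forcing else predictions[-1])
    return context, predictions

ref = [1, 5, 7, 9, 2]  # toy token ids: BOS ... EOS
print(next_token_contexts(ref, ToyDecoder(), teacher_forcing=True))
print(next_token_contexts(ref, ToyDecoder(), teacher_forcing=False))
```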
no code implementations • 16 Sep 2021 • Nathaniel Berger, Stefan Riezler, Artem Sokolov, Sebastian Ebert
Recently, more attention has been given to adversarial attacks on neural networks for natural language processing (NLP).
1 code implementation • 2 Jun 2020 • Mayumi Ohta, Nathaniel Berger, Artem Sokolov, Stefan Riezler
Interest in stochastic zeroth-order (SZO) methods has recently been revived in black-box optimization scenarios such as adversarial black-box attacks on deep neural networks.
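As an illustration of the zeroth-order idea, the sketch below uses a generic two-point Gaussian-smoothing gradient estimator that relies only on function evaluations; it is a standard SZO baseline, not the specific variant studied in the paper.

```python
import numpy as np

def szo_gradient(f, x, sigma=0.1, num_samples=10, rng=None):
    """Two-point stochastic zeroth-order estimate of the gradient of f at x.

    Only function evaluations are used (no backpropagation), which is why such
    estimators appear in black-box settings like adversarial attacks on models
    whose gradients are inaccessible.
    """
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape)  # random search direction
        grad += (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma) * u
    return grad / num_samples

# Toy usage: estimate the gradient of a quadratic, then take one descent step.
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0])
x -= 0.1 * szo_gradient(f, x)
print(x)  # moves toward the minimum at the origin
```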
1 code implementation • EAMT 2020 • Julia Kreutzer, Nathaniel Berger, Stefan Riezler
Sequence-to-sequence learning involves a trade-off between signal strength and annotation cost of training data.