Search Results for author: Nathaniel Berger

Found 7 papers, 2 papers with code

Post-edits Are Preferences Too

no code implementations • 3 Oct 2024 • Nathaniel Berger, Stefan Riezler, Miriam Exel, Matthias Huck

We attempt to use these implicit preferences for preference optimization (PO) and show that this helps the model move towards post-edit-like hypotheses and away from machine-translation-like hypotheses.

Machine Translation • Translation

Prompting Large Language Models with Human Error Markings for Self-Correcting Machine Translation

no code implementations • 4 Jun 2024 • Nathaniel Berger, Stefan Riezler, Miriam Exel, Matthias Huck

While large language models (LLMs) pre-trained on massive amounts of unpaired language data have reached the state-of-the-art in machine translation (MT) of general domain texts, post-editing (PE) is still required to correct errors and to enhance term translation quality in specialized domains.

Machine Translation • Translation

Enhancing Supervised Learning with Contrastive Markings in Neural Machine Translation Training

no code implementations • 17 Jul 2023 • Nathaniel Berger, Miriam Exel, Matthias Huck, Stefan Riezler

Supervised learning in Neural Machine Translation (NMT) typically follows a teacher forcing paradigm where reference tokens constitute the conditioning context in the model's prediction, instead of its own previous predictions.
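The teacher-forcing distinction above can be sketched with a toy decoder (a hypothetical bigram lookup table standing in for the NMT model; the table, tokens, and function names are illustrative assumptions, not the paper's system):

```python
# Toy illustration of teacher forcing vs. free-running decoding.
# The "model" is a deliberately imperfect next-token table (it errs after "the").
BIGRAMS = {"<s>": "the", "the": "dog", "cat": "sat", "sat": "</s>", "dog": "ran"}

def next_token(prev: str) -> str:
    return BIGRAMS.get(prev, "</s>")

def decode(reference, teacher_forcing):
    context, output = "<s>", []
    for t in range(len(reference)):
        pred = next_token(context)
        output.append(pred)
        # Teacher forcing conditions the next step on the reference token;
        # free-running decoding conditions on the model's own prediction.
        context = reference[t] if teacher_forcing else pred
    return output

reference = ["the", "cat", "sat", "</s>"]
print(decode(reference, teacher_forcing=True))   # one local error, then recovery
print(decode(reference, teacher_forcing=False))  # the early error compounds
```

With teacher forcing the decoder makes one local mistake ("dog") but is pulled back on track by the reference context; in free-running mode the same mistake propagates into the conditioning context and derails the rest of the output, which is the exposure-bias gap the snippet alludes to.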

Machine Translation • NMT +1

Don't Search for a Search Method -- Simple Heuristics Suffice for Adversarial Text Attacks

no code implementations • 16 Sep 2021 • Nathaniel Berger, Stefan Riezler, Artem Sokolov, Sebastian Ebert

Recently more attention has been given to adversarial attacks on neural networks for natural language processing (NLP).

Adversarial Text

Sparse Perturbations for Improved Convergence in Stochastic Zeroth-Order Optimization

1 code implementation • 2 Jun 2020 • Mayumi Ohta, Nathaniel Berger, Artem Sokolov, Stefan Riezler

Interest in stochastic zeroth-order (SZO) methods has recently been revived in black-box optimization scenarios such as adversarial black-box attacks to deep neural networks.
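The general SZO idea the snippet gestures at can be sketched as a two-point gradient estimator along a sparse random direction (a minimal sketch under assumed settings: the sparse Rademacher direction, the sparsity level `k`, the smoothing radius `mu`, and all names here are illustrative, not the paper's exact estimator):

```python
import random

def szo_sparse_gradient(f, x, k=2, mu=1e-3):
    """Two-point stochastic zeroth-order gradient estimate of f at x,
    perturbing only k of the len(x) coordinates (hypothetical sketch)."""
    n = len(x)
    idx = random.sample(range(n), k)
    u = [0.0] * n
    for i in idx:
        u[i] = random.choice([-1.0, 1.0])  # Rademacher entries on the sparse support
    x_plus = [xi + mu * ui for xi, ui in zip(x, u)]
    x_minus = [xi - mu * ui for xi, ui in zip(x, u)]
    # Finite-difference estimate of the directional derivative, scaled back
    # along the perturbation direction; f is queried as a black box only.
    scale = (f(x_plus) - f(x_minus)) / (2.0 * mu)
    return [scale * ui for ui in u]

# Usage: descend on a simple quadratic without ever evaluating its gradient.
random.seed(0)

def sphere(x):
    return sum(xi * xi for xi in x)

x = [1.0, -2.0, 3.0, 0.5]
for _ in range(500):
    g = szo_sparse_gradient(sphere, x, k=2)
    x = [xi - 0.1 * gi for xi, gi in zip(x, g)]
```

Each query pair touches only `k` coordinates, which is the kind of sparse perturbation the title refers to; the dense variant would draw a full random direction instead.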
