Search Results for author: Antonio Krüger

Found 5 papers, 1 paper with code

Mid-Air Hand Gestures for Post-Editing of Machine Translation

1 code implementation ACL 2021 Rashad Albo Jamara, Nico Herbig, Antonio Krüger, Josef van Genabith

Here, we present the first study that investigates the usefulness of mid-air hand gestures in combination with the keyboard (GK) for text editing in post-editing (PE) of machine translation (MT).

Machine Translation · Translation

MMPE: A Multi-Modal Interface using Handwriting, Touch Reordering, and Speech Commands for Post-Editing Machine Translation

no code implementations ACL 2020 Nico Herbig, Santanu Pal, Tim Düwel, Kalliopi Meladaki, Mahsa Monshizadeh, Vladislav Hnatovskiy, Antonio Krüger, Josef van Genabith

The shift from traditional translation to post-editing (PE) of machine-translated (MT) text can save time and reduce errors, but it also affects the design of translation interfaces, as the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals.

Machine Translation · Translation

MMPE: A Multi-Modal Interface for Post-Editing Machine Translation

no code implementations ACL 2020 Nico Herbig, Tim Düwel, Santanu Pal, Kalliopi Meladaki, Mahsa Monshizadeh, Antonio Krüger, Josef van Genabith

On the other hand, speech and multi-modal combinations of select & speech are considered suitable for replacements and insertions but offer less potential for deletion and reordering.

Machine Translation · Translation

USAAR-DFKI – The Transference Architecture for English–German Automatic Post-Editing

no code implementations WS 2019 Santanu Pal, Hongfei Xu, Nico Herbig, Antonio Krüger, Josef van Genabith

In this paper we present an English–German Automatic Post-Editing (APE) system called transference, submitted to the APE Task organized at WMT 2019.

Automatic Post-Editing · Translation

A Transformer-Based Multi-Source Automatic Post-Editing System

no code implementations WS 2018 Santanu Pal, Nico Herbig, Antonio Krüger, Josef van Genabith

The proposed model is an extension of the transformer architecture: two separate self-attention-based encoders encode the machine translation output (mt) and the source (src), followed by a joint encoder that attends over a combination of these two encoded sequences (enc_src and enc_mt) for generating the post-edited sentence.

Automatic Post-Editing · NMT · +2
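The abstract above describes the multi-source setup concretely enough to sketch. Below is a minimal PyTorch illustration, not the authors' implementation: the class name MultiSourceAPE, all hyperparameters, and the use of the stock torch.nn transformer modules are assumptions, positional encodings and padding masks are omitted for brevity, and concatenation along the sequence axis is just one plausible reading of "attends over a combination of these two encoded sequences".

```python
import torch
import torch.nn as nn

def encoder(d_model, nhead, num_layers):
    # A stack of self-attention encoder layers (positional encodings omitted).
    layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers)

class MultiSourceAPE(nn.Module):
    """Sketch: separate encoders for src and mt, a joint encoder over their
    combination, and a decoder generating the post-edited sentence."""

    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.src_encoder = encoder(d_model, nhead, num_layers)  # encodes src
        self.mt_encoder = encoder(d_model, nhead, num_layers)   # encodes mt
        self.joint_encoder = encoder(d_model, nhead, num_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, mt_ids, pe_ids):
        enc_src = self.src_encoder(self.embed(src_ids))
        enc_mt = self.mt_encoder(self.embed(mt_ids))
        # Joint encoder attends over the combination of enc_src and enc_mt
        # (here realized as concatenation along the sequence axis).
        joint = self.joint_encoder(torch.cat([enc_src, enc_mt], dim=1))
        # Causal mask so each target position only attends to earlier ones.
        mask = nn.Transformer.generate_square_subsequent_mask(pe_ids.size(1))
        dec = self.decoder(self.embed(pe_ids), joint, tgt_mask=mask)
        return self.out(dec)  # per-token vocabulary logits

# Toy usage with random token ids (batch of 2, vocabulary of 1000).
model = MultiSourceAPE(vocab_size=1000)
src = torch.randint(0, 1000, (2, 20))  # source sentences
mt = torch.randint(0, 1000, (2, 22))   # machine translation outputs
pe = torch.randint(0, 1000, (2, 21))   # post-edited targets (teacher forcing)
logits = model(src, mt, pe)            # shape: (2, 21, 1000)
```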
