23 papers with code • 0 benchmarks • 9 datasets
Automatic post-editing (APE) is used to correct errors in translations produced by machine translation systems.
These leaderboards are used to track progress in Automatic Post-Editing
We achieve this by decomposing the text-editing task into two sub-tasks: tagging, to decide on the subset of input tokens to keep and their order in the output text, and insertion, to in-fill tokens in the output that are not present in the input.
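The tag-then-insert decomposition can be illustrated with a minimal sketch. The tag scheme and function names below are illustrative assumptions, not the paper's exact formulation: each input token gets a KEEP or DELETE tag, optionally combined with an INS marker for a slot where missing tokens are in-filled.

```python
def apply_edits(tokens, tags, insertions):
    """Apply per-token edit tags, then in-fill insertions at marked slots.

    tags: "KEEP", "DELETE", or a tag containing "INS" to mark a slot
          where tokens missing from the input are inserted.
    insertions: one list of fill tokens per "INS" slot, consumed in order.
    """
    out, fills = [], iter(insertions)
    for tok, tag in zip(tokens, tags):
        if "INS" in tag:            # in-fill tokens absent from the input
            out.extend(next(fills))
        if "KEEP" in tag:           # copy the input token into the output
            out.append(tok)
    return out

# Toy example: MT output with a duplicated word and a missing phrase.
src   = ["the", "cat", "sat", "sat", "mat"]
tags  = ["KEEP", "KEEP", "KEEP", "DELETE", "INS|KEEP"]
fills = [["on", "the"]]
print(apply_edits(src, tags, fills))
# → ['the', 'cat', 'sat', 'on', 'the', 'mat']
```

In the actual systems, a tagging model predicts the tags and a separate insertion model predicts the fill tokens; the sketch only shows how the two outputs compose into the final text.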
Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities.
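One strategy for multi-source attention is hierarchical combination: attend within each source separately, then attend over the resulting per-source context vectors. The sketch below assumes simple dot-product attention and illustrative function names; it is not necessarily the formulation studied in the paper.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(query, keys):
    # Dot-product attention: weight each key vector by its
    # similarity to the query, then return the weighted sum.
    weights = softmax([dot(query, k) for k in keys])
    return [sum(w * k[i] for w, k in zip(weights, keys))
            for i in range(len(query))]

def multi_source_context(query, sources):
    # Hierarchical combination: one context vector per source,
    # then a second attention pass over those contexts.
    contexts = [attend(query, src) for src in sources]
    return attend(query, contexts)

query = [1.0, 0.0]                       # e.g. a decoder state
src_a = [[0.9, 0.1], [0.0, 1.0]]         # encoder states of source A
src_b = [[0.5, 0.5]]                     # encoder states of source B
ctx = multi_source_context(query, [src_a, src_b])
print(len(ctx))  # → 2 (same dimensionality as the query)
```

Alternatives to the hierarchical scheme include concatenating or averaging the per-source contexts; the choice of combination is exactly the kind of design question the abstract refers to.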
Ensembling Factored Neural Machine Translation Models for Automatic Post-Editing and Quality Estimation
This work presents a novel approach to Automatic Post-Editing (APE) and Word-Level Quality Estimation (QE) using ensembles of specialized Neural Machine Translation (NMT) systems.
Automatic post-editing (APE) systems aim to correct the systematic errors made by machine translators.
Transliterating named entities from one language into another can be approached as a neural machine translation (NMT) problem, for which we use deep attentional RNN encoder-decoder models.
Automated Post-Editing (PE) is the task of automatically correcting common and repetitive errors found in machine translation (MT) output.