2 code implementations • Findings (EMNLP) 2021 • Thomas Zenkel, Joern Wuebker, John DeNero
We describe the task of bilingual markup transfer, which involves placing markup tags from a source sentence into a fixed target translation.
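As an illustrative sketch only (the function below is hypothetical, and the paper's systems are learned models), the task can be grounded with word alignments: project each source tag span onto the target positions that its words align to.

```python
# Minimal alignment-based sketch of markup transfer (illustrative only).
def transfer_tags(alignment, tag_spans):
    """Project source tag spans onto a fixed target translation.

    alignment: set of (src_idx, tgt_idx) links
    tag_spans: list of (tag, src_start, src_end), end exclusive
    Returns projected (tag, tgt_start, tgt_end) spans.
    """
    projected = []
    for tag, s_start, s_end in tag_spans:
        # Target positions aligned to any source token inside the span.
        tgt_positions = [t for s, t in alignment if s_start <= s < s_end]
        if tgt_positions:
            projected.append((tag, min(tgt_positions), max(tgt_positions) + 1))
    return projected

# src: ["Click", "the", "blue", "button"]
# tgt: ["Klicken", "Sie", "auf", "die", "blaue", "Taste"]
links = {(0, 0), (0, 1), (0, 2), (1, 3), (2, 4), (3, 5)}
print(transfer_tags(links, [("<b>", 2, 4)]))  # [('<b>', 4, 6)]
```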
1 code implementation • NAACL 2022 • Jessy Lin, Geza Kovacs, Aditya Shastry, Joern Wuebker, John DeNero
We show that the human errors addressed by translation error correction (TEC) span a more diverse range of error types and include far fewer fluency errors than the MT errors in automatic post-editing datasets, suggesting the need for dedicated TEC models that are specialized to correct human errors.
no code implementations • 11 Nov 2020 • Samuel Läubli, Patrick Simianer, Joern Wuebker, Geza Kovacs, Rico Sennrich, Spence Green
Widely used computer-aided translation (CAT) tools divide documents into segments such as sentences and arrange them in a side-by-side, spreadsheet-like view.
no code implementations • ACL 2020 • Thomas Zenkel, Joern Wuebker, John DeNero
Although unnecessary for training neural MT models, word alignment still plays an important role in interactive applications of neural machine translation, such as annotation transfer and lexicon injection.
no code implementations • NAACL 2019 • Patrick Simianer, Joern Wuebker, John DeNero
Incremental domain adaptation, in which a system learns from the correct output for each input immediately after making its prediction for that input, can dramatically improve system performance for interactive machine translation.
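A minimal sketch of that incremental loop, assuming a PyTorch-style model with hypothetical `translate` and `loss` methods:

```python
# Illustrative online-adaptation loop (a sketch, not the paper's exact setup):
# after the user confirms each segment, take one gradient step on the
# reference before translating the next input.
import torch

def incremental_adapt(model, optimizer, stream):
    """stream yields (source, reference) pairs in document order."""
    for source, reference in stream:
        hypothesis = model.translate(source)  # hypothetical API: prediction shown to the user
        model.train()
        optimizer.zero_grad()
        loss = model.loss(source, reference)  # hypothetical API: cross-entropy on the reference
        loss.backward()
        optimizer.step()                      # model is already adapted for the next segment
        yield hypothesis
```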
1 code implementation • 31 Jan 2019 • Thomas Zenkel, Joern Wuebker, John DeNero
Multi-layer models with multiple attention heads per layer provide superior translation quality compared to simpler and shallower models, but as a result it is more challenging to determine which source context is most relevant to each target word.
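One common baseline for inducing alignments from such models, sketched below, averages the attention weights over layers and heads and links each target word to its highest-scoring source word; this is a generic baseline, not the paper's method.

```python
import numpy as np

def attention_to_alignment(attn):
    """attn: attention weights of shape (layers, heads, tgt_len, src_len)."""
    avg = attn.mean(axis=(0, 1))  # average over layers and heads: (tgt_len, src_len)
    return {(int(avg[t].argmax()), t) for t in range(avg.shape[0])}

# Toy weights: 6 layers, 8 heads, 7 target words, 5 source words.
attn = np.random.dirichlet(np.ones(5), size=(6, 8, 7))
print(sorted(attention_to_alignment(attn)))
```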
no code implementations • WS 2016 • Yunsu Kim, Andreas Guta, Joern Wuebker, Hermann Ney
This work systematically analyzes the smoothing effect of vocabulary reduction for phrase translation models.
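As a hedged sketch of one such smoothing scheme (the linear interpolation and the class map below are assumptions for illustration; the paper compares several variants): map each word to a coarse class, re-estimate phrase translation probabilities on the reduced vocabulary, and interpolate with the full-vocabulary relative frequencies.

```python
from collections import Counter

def make_smoothed_model(pairs, word_to_class, lam=0.8):
    """pairs: observed (src_phrase, tgt_phrase) pairs; phrases are token tuples."""
    def to_classes(phrase):
        return tuple(word_to_class.get(w, w) for w in phrase)

    full = Counter(pairs)
    full_src = Counter(s for s, _ in pairs)
    red = Counter((to_classes(s), to_classes(t)) for s, t in pairs)
    red_src = Counter(to_classes(s) for s, _ in pairs)

    def prob(src, tgt):
        # Assumes src was observed; a real system would also back off here.
        p_full = full[(src, tgt)] / full_src[src]
        p_red = red[(to_classes(src), to_classes(tgt))] / red_src[to_classes(src)]
        return lam * p_full + (1 - lam) * p_red

    return prob
```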
no code implementations • EMNLP 2018 • Joern Wuebker, Patrick Simianer, John DeNero
We propose and compare methods for gradient-based domain adaptation of self-attentive neural machine translation models.
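A minimal sketch of one variant in this family, assuming a PyTorch model with a hypothetical `loss` method: freeze most of the network and take gradient steps only on selected parameter groups when adapting to in-domain data. This is illustrative, not the paper's specific recipe.

```python
import torch

def adapt(model, in_domain_batches, update_keys=("bias",), lr=1e-4):
    """Fine-tune only parameters whose name contains one of update_keys."""
    for name, p in model.named_parameters():
        p.requires_grad = any(k in name for k in update_keys)
    trainable = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(trainable, lr=lr)
    model.train()
    for src, tgt in in_domain_batches:
        opt.zero_grad()
        loss = model.loss(src, tgt)  # hypothetical API: token-level cross-entropy
        loss.backward()
        opt.step()
```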