no code implementations • AMTA 2022 • Geza Kovacs, John DeNero
With the internet growing increasingly multilingual, translating websites is becoming increasingly important.
2 code implementations • Findings (EMNLP) 2021 • Thomas Zenkel, Joern Wuebker, John DeNero
We describe the task of bilingual markup transfer, which involves placing markup tags from a source sentence into a fixed target translation.
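The core of markup transfer can be illustrated with a word alignment between source and target tokens: a tag's source span is projected onto the target tokens aligned to it. This is a minimal sketch of that projection step, not the paper's actual model; the function name and input format are hypothetical.

```python
def transfer_markup(tag_spans, alignment):
    """Project source tag spans onto target tokens via a word alignment.

    tag_spans: dict mapping tag name -> (start, end) source token span
        (end exclusive).
    alignment: list of (src_idx, tgt_idx) word-alignment links.
    Returns a dict mapping tag name -> (start, end) target span covering
    every target token aligned to a source token inside the tag.
    """
    projected = {}
    for tag, (start, end) in tag_spans.items():
        targets = [t for (s, t) in alignment if start <= s < end]
        if targets:  # drop tags whose content has no aligned target token
            projected[tag] = (min(targets), max(targets) + 1)
    return projected
```

For example, with source tokens `["click", "here", "now"]`, a `<b>` tag over span `(1, 2)`, and a reordering alignment `[(0, 1), (1, 0), (2, 2)]`, the tag is projected onto target span `(0, 1)`.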
no code implementations • NAACL (TeachingNLP) 2021 • David Gaddy, Daniel Fried, Nikita Kitaev, Mitchell Stern, Rodolfo Corona, John DeNero, Dan Klein
We present a set of assignments for a graduate-level NLP course.
no code implementations • NAACL 2022 • Samee Ibraheem, Gaoyue Zhou, John DeNero
In this work, we analyze the effect of speaker role on language use through the game of Mafia, in which participants are assigned either an honest or a deceptive role.
1 code implementation • NAACL 2022 • Jessy Lin, Geza Kovacs, Aditya Shastry, Joern Wuebker, John DeNero
We show that human errors in TEC span a more diverse range of error types and include far fewer translation fluency errors than the MT errors in automatic post-editing datasets, suggesting the need for dedicated TEC models specialized to correct human errors.
no code implementations • 15 Dec 2020 • Nick Altieri, Briton Park, Mara Olson, John DeNero, Anobel Odisho, Bin Yu
Precision medicine has the potential to revolutionize healthcare, but much of the data for patients is locked away in unstructured free-text, limiting research and delivery of effective personalized treatments.
1 code implementation • EMNLP 2020 • Kevin Yang, Violet Yao, John DeNero, Dan Klein
We propose an efficient batching strategy for variable-length decoding on GPU architectures.
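A standard ingredient of efficient variable-length batching is grouping sequences of similar length so that padding work on the GPU is minimized. The sketch below shows only that generic length-bucketing idea, not the paper's specific strategy; the function name and interface are hypothetical.

```python
def length_bucketed_batches(lengths, batch_size):
    """Group example indices into batches of similar length to cut padding.

    lengths: list of sequence lengths, one per example.
    batch_size: number of examples per batch.
    Returns a list of index batches; sorting by length first means each
    batch pads only to the longest sequence within that batch.
    """
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
```

With lengths `[5, 2, 9, 3]` and `batch_size=2`, the short examples (indices 1 and 3) share one batch and the long ones (0 and 2) share another, so neither batch pads to the global maximum length of 9.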
no code implementations • ACL 2020 • Thomas Zenkel, Joern Wuebker, John DeNero
Although unnecessary for training neural MT models, word alignment still plays an important role in interactive applications of neural machine translation, such as annotation transfer and lexicon injection.
no code implementations • NAACL 2019 • Patrick Simianer, Joern Wuebker, John DeNero
Incremental domain adaptation, in which a system learns from the correct output for each input immediately after making its prediction for that input, can dramatically improve system performance for interactive machine translation.
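The predict-then-update cycle described above can be sketched as a toy online loop: the system makes a prediction, receives the correct output, and adapts before the next input. This is only an illustrative cache-based stand-in for real gradient updates; the class and its interface are hypothetical.

```python
class IncrementalAdapter:
    """Toy incremental-adaptation loop: predict, then learn from feedback.

    Caches corrected outputs keyed by source sentence, so a repeated
    input is translated with the correction; illustrates the
    predict-then-update cycle, not an actual neural MT system.
    """

    def __init__(self, base_translate):
        self.base_translate = base_translate  # fallback "model"
        self.memory = {}  # source -> corrected output seen so far

    def predict(self, source):
        # Prefer a previously observed correction over the base model.
        return self.memory.get(source, self.base_translate(source))

    def update(self, source, reference):
        # Learn from the correct output immediately after predicting.
        self.memory[source] = reference
```

Each interaction calls `predict` and then `update` with the reviewer's correct output, so system quality improves within a single session.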
1 code implementation • 31 Jan 2019 • Thomas Zenkel, Joern Wuebker, John DeNero
Multi-layer models with multiple attention heads per layer provide superior translation quality compared to simpler and shallower models, but determining what source context is most relevant to each target word is more challenging as a result.
1 code implementation • ICLR 2019 • John D. Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, Jacob Andreas, John DeNero, Pieter Abbeel, Sergey Levine
However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task.
no code implementations • EMNLP 2018 • Joern Wuebker, Patrick Simianer, John DeNero
We propose and compare methods for gradient-based domain adaptation of self-attentive neural machine translation models.