Search Results for author: John DeNero

Found 24 papers, 5 papers with code

Automatic Bilingual Markup Transfer

2 code implementations Findings (EMNLP) 2021 Thomas Zenkel, Joern Wuebker, John DeNero

We describe the task of bilingual markup transfer, which involves placing markup tags from a source sentence into a fixed target translation.

Machine Translation, Translation
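The markup-transfer task above can be illustrated with a small sketch. This is not the authors' method, just a minimal illustration assuming token-level word alignments are available: a tag span on the source side is projected onto the smallest covering span in the fixed target translation.

```python
# Hypothetical sketch of bilingual markup transfer (not the paper's model):
# project a source-side tag span onto a fixed target translation using
# word alignments given as (source_index, target_index) pairs.

def transfer_span(src_span, alignment):
    """Map a source token span [start, end) to the smallest target span
    covering all target tokens aligned to it; None if nothing aligns."""
    tgt = [t for s, t in alignment if src_span[0] <= s < src_span[1]]
    if not tgt:
        return None  # no aligned target tokens for this span
    return (min(tgt), max(tgt) + 1)

# A reordering translation: source token 1 (the tagged word) aligns to
# target position 0, so the tag moves to the front of the target.
alignment = [(0, 2), (1, 0), (2, 1)]
print(transfer_span((1, 2), alignment))  # (0, 1)
```

Real systems must additionally handle unaligned tags, overlapping spans, and nesting, which this sketch ignores.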

Putting the Con in Context: Identifying Deceptive Actors in the Game of Mafia

no code implementations NAACL 2022 Samee Ibraheem, Gaoyue Zhou, John DeNero

In this work, we analyze the effect of speaker role on language use through the game of Mafia, in which participants are assigned either an honest or a deceptive role.

Deception Detection, Dialogue Understanding (+3)

Automatic Correction of Human Translations

1 code implementation NAACL 2022 Jessy Lin, Geza Kovacs, Aditya Shastry, Joern Wuebker, John DeNero

We show that the human errors in TEC span a more diverse range of error types and include far fewer translation fluency errors than the MT errors in automatic post-editing datasets, suggesting the need for dedicated TEC models specialized to correct human errors.

Automatic Post-Editing, Translation

Enriched Annotations for Tumor Attribute Classification from Pathology Reports with Limited Labeled Data

no code implementations 15 Dec 2020 Nick Altieri, Briton Park, Mara Olson, John DeNero, Anobel Odisho, Bin Yu

Precision medicine has the potential to revolutionize healthcare, but much of the data for patients is locked away in unstructured free-text, limiting research and delivery of effective personalized treatments.

General Classification, Interpretable Machine Learning

End-to-End Neural Word Alignment Outperforms GIZA++

no code implementations ACL 2020 Thomas Zenkel, Joern Wuebker, John DeNero

Although unnecessary for training neural MT models, word alignment still plays an important role in interactive applications of neural machine translation, such as annotation transfer and lexicon injection.

Machine Translation, Translation (+1)

Measuring Immediate Adaptation Performance for Neural Machine Translation

no code implementations NAACL 2019 Patrick Simianer, Joern Wuebker, John DeNero

Incremental domain adaptation, in which a system learns from the correct output for each input immediately after making its prediction for that input, can dramatically improve system performance for interactive machine translation.

Domain Adaptation, Machine Translation (+2)
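The incremental adaptation protocol described above can be sketched in a few lines. The model class here is a toy stand-in with hypothetical `predict`/`update` methods, not the paper's system; it only shows the prediction-then-update ordering that defines immediate adaptation.

```python
# Hedged sketch of the immediate-adaptation loop: translate each input
# first, then learn from its correct output before the next input.

class MemorizingModel:
    """Toy stand-in for an adaptive MT model: memorizes past corrections."""
    def __init__(self):
        self.memory = {}

    def predict(self, source):
        return self.memory.get(source, "<unknown>")

    def update(self, source, reference):
        self.memory[source] = reference

def adapt_online(model, stream):
    """Predict each input, then immediately adapt on its reference."""
    outputs = []
    for source, reference in stream:
        outputs.append(model.predict(source))  # prediction comes first
        model.update(source, reference)        # then the model adapts
    return outputs

stream = [("hallo", "hello"), ("hallo", "hello")]
print(adapt_online(MemorizingModel(), stream))  # ['<unknown>', 'hello']
```

Because each update lands before the next prediction, a repeated input is handled correctly the second time, which is exactly the effect immediate-adaptation metrics try to measure.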

Adding Interpretable Attention to Neural Translation Models Improves Word Alignment

1 code implementation 31 Jan 2019 Thomas Zenkel, Joern Wuebker, John DeNero

Multi-layer models with multiple attention heads per layer provide superior translation quality compared to simpler and shallower models, but determining what source context is most relevant to each target word is more challenging as a result.

Machine Translation, Translation (+1)
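A common baseline for reading alignments out of attention, which the difficulty described above refers to, is to take the highest-weighted source position for each target word. The weights below are illustrative, not from the paper, and this simple argmax is exactly what becomes unreliable with many layers and heads.

```python
# Hedged sketch: derive word alignments from a single attention matrix
# by aligning each target word to its highest-weight source position.
import numpy as np

def alignments_from_attention(attn):
    """attn: (tgt_len, src_len) attention weights.
    Returns a list of (source_index, target_index) alignment pairs."""
    return [(int(np.argmax(row)), t) for t, row in enumerate(attn)]

attn = np.array([
    [0.7, 0.2, 0.1],   # target word 0 attends mostly to source word 0
    [0.1, 0.8, 0.1],   # target word 1 -> source word 1
    [0.2, 0.1, 0.7],   # target word 2 -> source word 2
])
print(alignments_from_attention(attn))  # [(0, 0), (1, 1), (2, 2)]
```

With multiple heads per layer there is no single matrix to argmax over, which motivates adding a dedicated, interpretable attention layer for alignment.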

Guiding Policies with Language via Meta-Learning

1 code implementation ICLR 2019 John D. Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, Jacob Andreas, John DeNero, Pieter Abbeel, Sergey Levine

However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task.

Imitation Learning, Instruction Following (+1)

Compact Personalized Models for Neural Machine Translation

no code implementations EMNLP 2018 Joern Wuebker, Patrick Simianer, John DeNero

We propose and compare methods for gradient-based domain adaptation of self-attentive neural machine translation models.

Domain Adaptation, Machine Translation (+1)
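One general way personalized models are kept compact, sketched here purely as an illustration and not as the paper's specific method, is to store only a small per-user offset on top of shared base parameters, so each user adds a few values rather than a full model copy.

```python
# Hypothetical sketch: compact personalization via sparse parameter
# offsets over a shared base model (parameter names are made up).

base = {"enc.w": 1.0, "dec.w": -2.0, "dec.b": 0.0}

def personalize(base, offsets):
    """Apply a sparse user-specific offset to the shared base parameters."""
    return {k: v + offsets.get(k, 0.0) for k, v in base.items()}

user_offsets = {"dec.b": 0.5}           # only one parameter adapted
print(personalize(base, user_offsets))  # {'enc.w': 1.0, 'dec.w': -2.0, 'dec.b': 0.5}
```

Storing only the offsets means the per-user footprint scales with the number of adapted parameters, not with the size of the full translation model.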
