Search Results for author: Philipp Koehn

Found 128 papers, 21 papers with code

Machine Translation Quality and Post-Editor Productivity

no code implementations AMTA 2016 Marina Sanchez-Torron, Philipp Koehn

We assessed how different machine translation (MT) systems affect the post-editing (PE) process and product of professional English–Spanish translators.

Machine Translation Translation

Learning Curricula for Multilingual Neural Machine Translation Training

no code implementations MTSummit 2021 Gaurav Kumar, Philipp Koehn, Sanjeev Khudanpur

Low-resource Multilingual Neural Machine Translation (MNMT) is typically tasked with improving the translation performance on one or more language pairs with the aid of high-resource language pairs.

Machine Translation Translation

Neural Interactive Translation Prediction

no code implementations AMTA 2016 Rebecca Knowles, Philipp Koehn

We present an interactive translation prediction method based on neural machine translation.

Machine Translation Translation

Findings of the 2021 Conference on Machine Translation (WMT21)

no code implementations WMT (EMNLP) 2021 Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondřej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R. Costa-Jussa, Cristina España-Bonet, Angela Fan, Christian Federmann, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Philipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Auguste Tapo, Marco Turchi, Valentin Vydrin, Marcos Zampieri

This paper presents the results of the news translation task, the multilingual low-resource translation for Indo-European languages, the triangular translation task, and the automatic post-editing task organised as part of the Conference on Machine Translation (WMT) 2021. In the news task, participants were asked to build machine translation systems for any of 10 language pairs, to be evaluated on test sets consisting mainly of news stories.

Machine Translation Translation

Dual Conditional Cross Entropy Scores and LASER Similarity Scores for the WMT20 Parallel Corpus Filtering Shared Task

no code implementations WMT (EMNLP) 2020 Felicia Koerner, Philipp Koehn

This paper describes our submission to the WMT20 Parallel Corpus Filtering and Alignment for Low-Resource Conditions Shared Task.
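
The dual conditional cross-entropy criterion this submission builds on (following Junczys-Dowmunt, 2018) scores a sentence pair with per-word cross-entropies from two translation models of opposite directions, rewarding pairs where both entropies are low and the two models agree. A rough sketch of the scoring function, assuming the standard formulation (the submission's exact weighting may differ):

```python
import math

def dcce_score(ce_fwd: float, ce_back: float) -> float:
    """Dual conditional cross-entropy score for one sentence pair.

    ce_fwd:  per-word cross-entropy of the target given the source
             under a forward translation model.
    ce_back: per-word cross-entropy of the source given the target
             under a backward translation model.

    Low disagreement between the two models and a low average entropy
    both indicate a cleaner pair; exp(-score) maps the penalty into
    (0, 1], where 1 is best.
    """
    disagreement = abs(ce_fwd - ce_back)
    average = 0.5 * (ce_fwd + ce_back)
    return math.exp(-(disagreement + average))
```

Pairs are then ranked by this score and the top portion of the noisy corpus is kept.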

An exploratory approach to the Parallel Corpus Filtering shared task WMT20

no code implementations WMT (EMNLP) 2020 Ankur Kejriwal, Philipp Koehn

In this document we describe our submission to the parallel corpus filtering task using multilingual word embedding, language models and an ensemble of pre and post filtering rules.

Findings of the WMT 2020 Shared Task on Parallel Corpus Filtering and Alignment

no code implementations WMT (EMNLP) 2020 Philipp Koehn, Vishrav Chaudhary, Ahmed El-Kishky, Naman Goyal, Peng-Jen Chen, Francisco Guzmán

Following the two preceding WMT Shared Tasks on Parallel Corpus Filtering (Koehn et al., 2018, 2019), we posed again the challenge of assigning sentence-level quality scores for very noisy corpora of sentence pairs crawled from the web, with the goal of sub-selecting the highest-quality data to be used to train machine translation systems.

Translation

An Alignment-Based Approach to Semi-Supervised Bilingual Lexicon Induction with Small Parallel Corpora

1 code implementation MTSummit 2021 Kelly Marchisio, Philipp Koehn, Conghao Xiong

Aimed at generating a seed lexicon for use in downstream natural language tasks, unsupervised methods for bilingual lexicon induction have received much attention in the academic literature recently.

Bilingual Lexicon Induction Translation

Consistent Human Evaluation of Machine Translation across Language Pairs

no code implementations 17 May 2022 Daniel Licht, Cynthia Gao, Janice Lam, Francisco Guzman, Mona Diab, Philipp Koehn

Obtaining meaningful quality scores for machine translation systems through human evaluation remains a challenge given the high variability between human evaluators, partly due to subjective expectations for translation quality for different language pairs.

Machine Translation +1

Learn To Remember: Transformer with Recurrent Memory for Document-Level Machine Translation

no code implementations 3 May 2022 Yukun Feng, Feng Li, Ziang Song, Boyuan Zheng, Philipp Koehn

We conduct experiments on three popular datasets for document-level machine translation and our model has an average improvement of 0.91 s-BLEU over the sentence-level baseline.

Document Level Machine Translation Machine Translation +1

Data Selection Curriculum for Neural Machine Translation

no code implementations 25 Mar 2022 Tasnim Mohiuddin, Philipp Koehn, Vishrav Chaudhary, James Cross, Shruti Bhosale, Shafiq Joty

In this work, we introduce a two-stage curriculum training framework for NMT where we fine-tune a base NMT model on subsets of data, selected by both deterministic scoring using pre-trained methods and online scoring that considers prediction scores of the emerging NMT model.

Machine Translation Translation
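
The two-stage selection described above can be sketched as follows; `offline_score` and `online_score` are hypothetical callables standing in for the pre-trained scorer and the emerging NMT model's prediction scores, not the paper's actual interfaces:

```python
def select_curriculum(pairs, offline_score, online_score, keep_frac=0.5):
    """Two-stage data selection sketch.

    Stage 1: rank all sentence pairs with a fixed, pre-trained scorer
    (deterministic scoring) and keep the top fraction.
    Stage 2: re-rank the survivors with prediction scores from the
    emerging NMT model (online scoring) and keep the top fraction of
    those for the next fine-tuning phase.
    """
    k1 = max(1, int(len(pairs) * keep_frac))
    stage1 = sorted(pairs, key=offline_score, reverse=True)[:k1]
    k2 = max(1, int(len(stage1) * keep_frac))
    return sorted(stage1, key=online_score, reverse=True)[:k2]
```

The base model is then fine-tuned on the returned subset, and the online stage can be repeated as the model improves.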

Alternative Input Signals Ease Transfer in Multilingual Machine Translation

no code implementations ACL 2022 Simeng Sun, Angela Fan, James Cross, Vishrav Chaudhary, Chau Tran, Philipp Koehn, Francisco Guzman

Further, we find that incorporating alternative inputs via self-ensemble can be particularly effective when the training set is small, leading to +5 BLEU when only 5% of the total training data is accessible.

Machine Translation Translation

Contrastive Clustering to Mine Pseudo Parallel Data for Unsupervised Translation

no code implementations ICLR 2022 Xuan-Phi Nguyen, Hongyu Gong, Yun Tang, Changhan Wang, Philipp Koehn, Shafiq Joty

Modern unsupervised machine translation systems mostly train their models by generating synthetic parallel training data from large unlabeled monolingual corpora of different languages through various means, such as iterative back-translation.

Translation +1

Levenshtein Training for Word-level Quality Estimation

1 code implementation EMNLP 2021 Shuoyang Ding, Marcin Junczys-Dowmunt, Matt Post, Philipp Koehn

We propose a novel scheme to use the Levenshtein Transformer to perform the task of word-level quality estimation.

Transfer Learning Translation
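
Word-level quality estimation assigns each MT output token an OK or BAD tag relative to a human post-edit. As a minimal sketch of how such labels arise from a sequence alignment (here using Python's difflib as a stand-in for a true Levenshtein alignment, not the paper's Levenshtein Transformer):

```python
from difflib import SequenceMatcher

def word_qe_tags(mt_tokens, pe_tokens):
    """Tag each MT token OK/BAD by aligning the MT output to a
    post-edited reference: tokens inside a matching block are OK,
    everything else (substituted or deleted in post-editing) is BAD."""
    tags = ["BAD"] * len(mt_tokens)
    matcher = SequenceMatcher(a=mt_tokens, b=pe_tokens, autojunk=False)
    for block in matcher.get_matching_blocks():
        for i in range(block.a, block.a + block.size):
            tags[i] = "OK"
    return tags
```

The paper's contribution is to predict such tags directly with a Levenshtein Transformer rather than computing them post hoc.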

Facebook AI WMT21 News Translation Task Submission

no code implementations 6 Aug 2021 Chau Tran, Shruti Bhosale, James Cross, Philipp Koehn, Sergey Edunov, Angela Fan

We describe Facebook's multilingual model submission to the WMT2021 shared task on news translation.

Translation

Cross-Lingual BERT Contextual Embedding Space Mapping with Isotropic and Isometric Conditions

1 code implementation 19 Jul 2021 Haoran Xu, Philipp Koehn

Typically, a linearly orthogonal transformation mapping is learned by aligning static type-level embeddings to build a shared semantic space.
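
The linearly orthogonal mapping mentioned here is typically the closed-form orthogonal Procrustes solution. A minimal NumPy sketch, assuming X and Y are matrices of paired static word embeddings from the two languages:

```python
import numpy as np

def procrustes_map(X, Y):
    """Learn an orthogonal matrix W minimizing ||XW - Y||_F.

    X, Y: (n, d) arrays of n paired d-dimensional word vectors.
    The closed-form solution is W = U V^T from the SVD of X^T Y,
    which keeps the mapping length- and angle-preserving.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt
```

After mapping, nearest-neighbour search in the shared space retrieves translations; the paper studies when such a linear map is justified (isotropy and isometry of the two spaces).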

On the Evaluation of Machine Translation for Terminology Consistency

1 code implementation 22 Jun 2021 Md Mahfuz ibn Alam, Antonios Anastasopoulos, Laurent Besacier, James Cross, Matthias Gallé, Philipp Koehn, Vassilina Nikoulina

As neural machine translation (NMT) systems become an important part of professional translator pipelines, a growing body of work focuses on combining NMT with terminologies.

Domain Adaptation Machine Translation +1

Embedding-Enhanced Giza++: Improving Alignment in Low- and High- Resource Scenarios Using Embedding Space Geometry

no code implementations 18 Apr 2021 Kelly Marchisio, Conghao Xiong, Philipp Koehn

A popular natural language processing task decades ago, word alignment has been dominated until recently by GIZA++, a statistical method based on the 30-year-old IBM models.

Machine Translation Translation +1

Evaluating Saliency Methods for Neural Language Models

1 code implementation NAACL 2021 Shuoyang Ding, Philipp Koehn

Saliency methods are widely used to interpret neural network predictions, but different variants of saliency methods often disagree even on the interpretations of the same prediction made by the same model.

Learning Feature Weights using Reward Modeling for Denoising Parallel Corpora

no code implementations WMT (EMNLP) 2021 Gaurav Kumar, Philipp Koehn, Sanjeev Khudanpur

These feature weights which are optimized directly for the task of improving translation performance, are used to score and filter sentences in the noisy corpora more effectively.

Denoising Language Modelling +2

Learning Policies for Multilingual Training of Neural Machine Translation Systems

no code implementations 11 Mar 2021 Gaurav Kumar, Philipp Koehn, Sanjeev Khudanpur

Low-resource Multilingual Neural Machine Translation (MNMT) is typically tasked with improving the translation performance on one or more language pairs with the aid of high-resource language pairs.

Machine Translation Translation

Zero-Shot Cross-Lingual Dependency Parsing through Contextual Embedding Transformation

1 code implementation EACL (AdaptNLP) 2021 Haoran Xu, Philipp Koehn

Linear embedding transformation has been shown to be effective for zero-shot cross-lingual transfer tasks and achieve surprisingly promising results.

Dependency Parsing Translation +1

Simulated Multiple Reference Training Improves Low-Resource Machine Translation

1 code implementation EMNLP 2020 Huda Khayrallah, Brian Thompson, Matt Post, Philipp Koehn

Many valid translations exist for a given sentence, yet machine translation (MT) is trained with a single reference translation, exacerbating data sparsity in low-resource settings.

Machine Translation Translation

Exploiting Sentence Order in Document Alignment

1 code implementation EMNLP 2020 Brian Thompson, Philipp Koehn

We present a simple document alignment method that incorporates sentence order information in both candidate generation and candidate re-scoring.

When Does Unsupervised Machine Translation Work?

no code implementations WMT (EMNLP) 2020 Kelly Marchisio, Kevin Duh, Philipp Koehn

We additionally find that unsupervised MT performance declines when source and target languages use different scripts, and observe very poor performance on authentic low-resource language pairs.

Translation Unsupervised Machine Translation

CCAligned: A Massive Collection of Cross-Lingual Web-Document Pairs

no code implementations EMNLP 2020 Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzman, Philipp Koehn

We mine sixty-eight snapshots of the Common Crawl corpus and identify web document pairs that are translations of each other.
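
Cross-lingual document pairs in web crawls are commonly detected by URL matching: delete embedded language identifiers from each URL and pair documents whose stripped URLs collide. A toy sketch of that idea (the handful of language codes and the regex are illustrative, not the paper's actual matching rules):

```python
import re

# Illustrative subset of language identifiers that appear inside URLs.
LANG_ID = re.compile(r"(?<=[/._-])(?:en|fr|de|es|ar)(?=[/._-]|$)", re.IGNORECASE)

def url_key(url: str) -> str:
    """Normalize a URL by deleting embedded language identifiers, so
    translated versions of the same page share one key."""
    return LANG_ID.sub("", url)

def pair_documents(urls):
    """Group URLs whose language-stripped forms collide; groups of
    two or more are candidate cross-lingual document pairs."""
    groups = {}
    for url in urls:
        groups.setdefault(url_key(url), []).append(url)
    return [g for g in groups.values() if len(g) > 1]
```

The lookbehind/lookahead guards keep codes like "fr" from firing inside ordinary words such as "friend".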

Spelling-Aware Construction of Macaronic Texts for Teaching Foreign-Language Vocabulary

no code implementations IJCNLP 2019 Adithya Renduchintala, Philipp Koehn, Jason Eisner

We present a machine foreign-language teacher that modifies text in a student's native language (L1) by replacing some word tokens with glosses in a foreign language (L2), in such a way that the student can acquire L2 vocabulary simply by reading the resulting macaronic text.

Language Modelling

Vecalign: Improved Sentence Alignment in Linear Time and Space

no code implementations IJCNLP 2019 Brian Thompson, Philipp Koehn

It substantially outperforms the popular Hunalign toolkit at recovering Bible verse alignments in medium- to low-resource language pairs, and it improves downstream MT quality by 1.7 and 1.6 BLEU in Sinhala-English and Nepali-English, respectively, compared to the Hunalign-based Paracrawl pipeline.

Machine Translation Sentence Embeddings +1

Johns Hopkins University Submission for WMT News Translation Task

no code implementations WS 2019 Kelly Marchisio, Yash Kumar Lal, Philipp Koehn

We describe the work of Johns Hopkins University for the shared task of news translation organized by the Fourth Conference on Machine Translation (2019).

Machine Translation Translation

Simple Construction of Mixed-Language Texts for Vocabulary Learning

no code implementations WS 2019 Adithya Renduchintala, Philipp Koehn, Jason Eisner

We accomplish this by modifying a cloze language model to incrementally learn new vocabulary items, and use this language model as a proxy for the word guessing and learning ability of real students.

Language Modelling

Findings of the WMT 2019 Shared Task on Parallel Corpus Filtering for Low-Resource Conditions

no code implementations WS 2019 Philipp Koehn, Francisco Guzmán, Vishrav Chaudhary, Juan Pino

Following the WMT 2018 Shared Task on Parallel Corpus Filtering, we posed the challenge of assigning sentence-level quality scores for very noisy corpora of sentence pairs crawled from the web, with the goal of sub-selecting 2% and 10% of the highest-quality data to be used to train machine translation systems.

Machine Translation Translation

De-Mixing Sentiment from Code-Mixed Text

no code implementations ACL 2019 Yash Kumar Lal, Vaibhav Kumar, Mrinal Dhar, Manish Shrivastava, Philipp Koehn

The Collective Encoder captures the overall sentiment of the sentence, while the Specific Encoder utilizes an attention mechanism in order to focus on individual sentiment-bearing sub-words.

Sentiment Analysis Word Embeddings

Saliency-driven Word Alignment Interpretation for Neural Machine Translation

1 code implementation WS 2019 Shuoyang Ding, Hainan Xu, Philipp Koehn

Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments.

Machine Translation Translation +1

Translationese in Machine Translation Evaluation

no code implementations 24 Jun 2019 Yvette Graham, Barry Haddow, Philipp Koehn

Finally, we provide a comprehensive check-list for future machine translation evaluation.

Machine Translation Translation

Parallelizable Stack Long Short-Term Memory

1 code implementation WS 2019 Shuoyang Ding, Philipp Koehn

Stack Long Short-Term Memory (StackLSTM) is useful for various applications such as parsing and string-to-tree neural machine translation, but it is also known to be notoriously difficult to parallelize for GPU training due to the fact that the computations are dependent on discrete operations.

Machine Translation Translation

Context and Copying in Neural Machine Translation

no code implementations EMNLP 2018 Rebecca Knowles, Philipp Koehn

In this work, we show that they learn to copy words based on both the context in which the words appear as well as features of the words themselves.

Machine Translation Translation

Findings of the WMT 2018 Shared Task on Parallel Corpus Filtering

no code implementations WS 2018 Philipp Koehn, Huda Khayrallah, Kenneth Heafield, Mikel L. Forcada

We posed the shared task of assigning sentence-level quality scores for a very noisy corpus of sentence pairs crawled from the web, with the goal of sub-selecting 1% and 10% of high-quality data to be used to train machine translation systems.

Machine Translation Outlier Detection +1

The JHU Machine Translation Systems for WMT 2018

no code implementations WS 2018 Philipp Koehn, Kevin Duh, Brian Thompson

We report on the efforts of the Johns Hopkins University to develop neural machine translation systems for the shared task for news translation organized around the Conference for Machine Translation (WMT) 2018.

Machine Translation Translation

Freezing Subnetworks to Analyze Domain Adaptation in Neural Machine Translation

1 code implementation WS 2018 Brian Thompson, Huda Khayrallah, Antonios Anastasopoulos, Arya D. McCarthy, Kevin Duh, Rebecca Marvin, Paul McNamee, Jeremy Gwinnup, Tim Anderson, Philipp Koehn

To better understand the effectiveness of continued training, we analyze the major components of a neural machine translation system (the encoder, decoder, and each embedding space) and consider each component's contribution to, and capacity for, domain adaptation.

Domain Adaptation Machine Translation +1

Character-Aware Decoder for Translation into Morphologically Rich Languages

no code implementations WS 2019 Adithya Renduchintala, Pamela Shapiro, Kevin Duh, Philipp Koehn

Neural machine translation (NMT) systems operate primarily on words (or sub-words), ignoring lower-level patterns of morphology.

Machine Translation +1

Document-Level Adaptation for Neural Machine Translation

no code implementations WS 2018 Sachith Sri Ram Kothur, Rebecca Knowles, Philipp Koehn

It is common practice to adapt machine translation systems to novel domains, but even a well-adapted system may be able to perform better on a particular document if it were to learn from a translator's corrections within the document itself.

Machine Translation Translation +1

Regularized Training Objective for Continued Training for Domain Adaptation in Neural Machine Translation

1 code implementation WS 2018 Huda Khayrallah, Brian Thompson, Kevin Duh, Philipp Koehn

Supervised domain adaptation, where a large generic corpus and a smaller in-domain corpus are both available for training, is a challenge for neural machine translation (NMT).

Domain Adaptation Machine Translation +1

Iterative Back-Translation for Neural Machine Translation

no code implementations WS 2018 Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, Trevor Cohn

We present iterative back-translation, a method for generating increasingly better synthetic parallel data from monolingual data to train neural machine translation systems.

Machine Translation Translation
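
The back-and-forth training loop described above can be sketched with stand-in `train` and `translate` callables (hypothetical placeholders for a real NMT toolkit, not the paper's implementation):

```python
def iterative_back_translation(parallel, mono_src, mono_tgt,
                               train, translate, rounds=3):
    """Sketch of the iterative back-translation loop.

    parallel: list of (src, tgt) sentence pairs.
    mono_src / mono_tgt: monolingual sentences in each language.
    Each round retrains both directions on the parallel data plus the
    synthetic pairs produced by the other direction's latest model,
    so the synthetic data improves as the models do.
    """
    fwd = train(parallel)                        # src -> tgt
    bwd = train([(t, s) for s, t in parallel])   # tgt -> src
    for _ in range(rounds):
        # Back-translate target monolingual data with the backward model.
        synthetic_fwd = [(translate(bwd, t), t) for t in mono_tgt]
        fwd = train(parallel + synthetic_fwd)
        # Back-translate source monolingual data with the new forward model.
        synthetic_bwd = [(translate(fwd, s), s) for s in mono_src]
        bwd = train([(t, s) for s, t in parallel] + synthetic_bwd)
    return fwd, bwd
```

In practice `train` is a full NMT training run and `translate` is beam-search decoding, so each round is expensive; the paper studies how much each extra round helps.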

On the Impact of Various Types of Noise on Neural Machine Translation

1 code implementation WS 2018 Huda Khayrallah, Philipp Koehn

We examine how various types of noise in the parallel training data impact the quality of neural machine translation systems.

Machine Translation Translation

Neural Machine Translation

5 code implementations 22 Sep 2017 Philipp Koehn

Draft of textbook chapter on neural machine translation.


Machine Translation Translation

Knowledge Tracing in Sequential Learning of Inflected Vocabulary

no code implementations CONLL 2017 Adithya Renduchintala, Philipp Koehn, Jason Eisner

We present a feature-rich knowledge tracing method that captures a student's acquisition and retention of knowledge during a foreign language phrase learning task.

Knowledge Tracing Structured Prediction

Six Challenges for Neural Machine Translation

no code implementations WS 2017 Philipp Koehn, Rebecca Knowles

We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search.

Machine Translation Translation +1

Predicting Target Language CCG Supertags Improves Neural Machine Translation

no code implementations WS 2017 Maria Nadejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, Philipp Koehn, Alexandra Birch

Our results on WMT data show that explicitly modeling target-syntax improves machine translation quality for German->English, a high-resource pair, and for Romanian->English, a low-resource pair, as well as for several syntactic phenomena including prepositional phrase attachment.

Machine Translation Prepositional Phrase Attachment +1
