Search Results for author: Julia Kreutzer

Found 30 papers, 16 papers with code

Bandits Don’t Follow Rules: Balancing Multi-Facet Machine Translation with Multi-Armed Bandits

no code implementations Findings (EMNLP) 2021 Julia Kreutzer, David Vilar, Artem Sokolov

Training data for machine translation (MT) is often sourced from a multitude of large corpora that are multi-faceted in nature, e.g. containing contents from multiple domains or different levels of quality or complexity.

Machine Translation, Multi-Armed Bandits, +1
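The paper's framing treats each data facet (domain, quality level, etc.) as an arm of a multi-armed bandit that is pulled to choose the next training batch. As a rough illustration of that idea (an EXP3-style sketch, not the authors' exact algorithm), a bandit can learn to favour the corpus whose batches yield the highest reward:

```python
import math
import random

class EXP3:
    """Minimal EXP3 multi-armed bandit (illustrative sketch only)."""

    def __init__(self, n_arms, gamma=0.1, seed=42):
        self.n = n_arms
        self.gamma = gamma              # exploration rate
        self.weights = [1.0] * n_arms
        self.rng = random.Random(seed)

    def probs(self):
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.n
                for w in self.weights]

    def select(self):
        return self.rng.choices(range(self.n), weights=self.probs(), k=1)[0]

    def update(self, arm, reward):
        # importance-weight the observed reward so the estimate stays unbiased
        p = self.probs()[arm]
        self.weights[arm] *= math.exp(self.gamma * (reward / p) / self.n)

# toy simulation: three "corpora" whose batches improve the MT model
# with different (hidden) probabilities
helpfulness = [0.2, 0.5, 0.8]
bandit = EXP3(n_arms=3)
pulls = [0, 0, 0]
for _ in range(2000):
    arm = bandit.select()
    reward = 1.0 if bandit.rng.random() < helpfulness[arm] else 0.0
    bandit.update(arm, reward)
    pulls[arm] += 1
# the bandit should concentrate its pulls on the most helpful corpus (arm 2)
```

In the actual training setting the reward would come from a signal such as held-out loss improvement rather than a fixed Bernoulli draw.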

Can Multilinguality benefit Non-autoregressive Machine Translation?

no code implementations 16 Dec 2021 Sweta Agrawal, Julia Kreutzer, Colin Cherry

Non-autoregressive (NAR) machine translation has recently achieved significant improvements, and now outperforms autoregressive (AR) models on some benchmarks, providing an efficient alternative to AR inference.

Machine Translation, Translation

Evaluating Multiway Multilingual NMT in the Turkic Languages

1 code implementation WMT (EMNLP) 2021 Jamshidbek Mirzakhalov, Anoop Babu, Aigiz Kunafin, Ahsan Wahab, Behzod Moydinboyev, Sardana Ivanova, Mokhiyakhon Uzokova, Shaxnoza Pulatova, Duygu Ataman, Julia Kreutzer, Francis Tyers, Orhan Firat, John Licato, Sriram Chellappan

Then, we train 26 bilingual baselines as well as a multi-way neural MT (MNMT) model using the corpus and perform an extensive analysis using automatic metrics as well as human evaluations.

Machine Translation

Modelling Latent Translations for Cross-Lingual Transfer

1 code implementation 23 Jul 2021 Edoardo Maria Ponti, Julia Kreutzer, Ivan Vulić, Siva Reddy

To remedy this, we propose a new technique that integrates both steps of the traditional pipeline (translation and classification) into a single model, by treating the intermediate translations as a latent random variable.

Cross-Lingual Transfer, Few-Shot Learning, +4

Revisiting the Weaknesses of Reinforcement Learning for Neural Machine Translation

1 code implementation NAACL 2021 Samuel Kiegeland, Julia Kreutzer

Policy gradient algorithms have found wide adoption in NLP, but have recently become subject to criticism, doubting their suitability for NMT.

Domain Adaptation, Machine Translation, +1

Inference Strategies for Machine Translation with Conditional Masking

no code implementations EMNLP 2020 Julia Kreutzer, George Foster, Colin Cherry

Conditional masked language model (CMLM) training has proven successful for non-autoregressive and semi-autoregressive sequence generation tasks, such as machine translation.

Language Modelling, Machine Translation, +1
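CMLM inference typically starts from a fully masked target, fills in every position in parallel, and then re-masks the least confident predictions for another round (the mask-predict scheme). A toy sketch of that loop, with a hypothetical `toy_model` standing in for a trained network:

```python
def mask_predict(model, length, iterations=3, mask="<mask>"):
    """Sketch of mask-predict decoding: fill all positions in parallel,
    then re-mask and re-predict the least confident ones."""
    tokens = [mask] * length
    for t in range(iterations):
        preds = model(tokens)            # one (token, confidence) per position
        tokens = [tok for tok, _ in preds]
        n_mask = int(length * (iterations - 1 - t) / iterations)
        if n_mask == 0:
            break
        # re-mask the n_mask least confident positions for the next round
        order = sorted(range(length), key=lambda i: preds[i][1])
        for i in order[:n_mask]:
            tokens[i] = mask
    return tokens

# hypothetical stand-in for a trained CMLM: always proposes the reference
# token, and is more confident where neighbouring context is already filled
reference = ["ein", "kleiner", "test"]

def toy_model(tokens):
    out = []
    for i in range(len(tokens)):
        context = sum(1 for j in (i - 1, i + 1)
                      if 0 <= j < len(tokens) and tokens[j] != "<mask>")
        out.append((reference[i], 0.5 + 0.25 * context))
    return out

decoded = mask_predict(toy_model, len(reference))
```

The number of iterations trades decoding speed against quality, which is exactly the kind of inference choice the paper studies.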

On Optimal Transformer Depth for Low-Resource Language Translation

1 code implementation 9 Apr 2020 Elan van Biljon, Arnu Pretorius, Julia Kreutzer

Therefore, by showing that transformer models perform well (and often best) at low-to-moderate depth, we hope to convince fellow researchers to devote less computational resources, as well as time, to exploring overly large models during the development of these systems.

Machine Translation, Translation

Joey NMT: A Minimalist NMT Toolkit for Novices

8 code implementations IJCNLP 2019 Julia Kreutzer, Jasmijn Bastings, Stefan Riezler

We present Joey NMT, a minimalist neural machine translation toolkit based on PyTorch that is specifically designed for novices.

Machine Translation, Translation

Self-Regulated Interactive Sequence-to-Sequence Learning

7 code implementations ACL 2019 Julia Kreutzer, Stefan Riezler

Not all types of supervision signals are created equal: Different types of feedback have different costs and effects on learning.

Active Learning, Machine Translation, +1

Explaining and Generalizing Back-Translation through Wake-Sleep

no code implementations 12 Jun 2018 Ryan Cotterell, Julia Kreutzer

Back-translation has become a commonly employed heuristic for semi-supervised neural machine translation.

Machine Translation, Translation
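Back-translation pairs target-language monolingual sentences with synthetic sources produced by a reverse (target-to-source) model, and adds the pairs to the training data. A minimal sketch, with a hypothetical word-for-word lookup standing in for a trained reverse model:

```python
def back_translate(mono_target, reverse_model):
    """Sketch of back-translation: pair each target-language monolingual
    sentence with a synthetic source from a target->source model."""
    return [(reverse_model(sent), sent) for sent in mono_target]

# hypothetical stand-in for a trained reverse (de->en) model
DE_EN = {"das": "the", "haus": "house", "ist": "is", "klein": "small"}

def toy_reverse_model(sentence):
    return " ".join(DE_EN.get(word, word) for word in sentence.split())

synthetic_parallel = back_translate(["das haus ist klein"], toy_reverse_model)
# each (synthetic source, genuine target) pair can join the parallel data
```

The paper's contribution is an interpretation of this heuristic (via the wake-sleep algorithm), not the pipeline itself; the sketch only shows the data flow being explained.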

Reliability and Learnability of Human Bandit Feedback for Sequence-to-Sequence Reinforcement Learning

1 code implementation ACL 2018 Julia Kreutzer, Joshua Uyheng, Stefan Riezler

We present a study on reinforcement learning (RL) from human bandit feedback for sequence-to-sequence learning, exemplified by the task of bandit neural machine translation (NMT).

Machine Translation, Translation

A Reinforcement Learning Approach to Interactive-Predictive Neural Machine Translation

1 code implementation 3 May 2018 Tsz Kin Lam, Julia Kreutzer, Stefan Riezler

We present an approach to interactive-predictive neural machine translation that attempts to reduce human effort from three directions: Firstly, instead of requiring humans to select, correct, or delete segments, we employ the idea of learning from human reinforcements in form of judgments on the quality of partial translations.

Machine Translation, Translation

Can Neural Machine Translation be Improved with User Feedback?

no code implementations NAACL 2018 Julia Kreutzer, Shahram Khadivi, Evgeny Matusov, Stefan Riezler

We present the first real-world application of methods for improving neural machine translation (NMT) with human reinforcement, based on explicit and implicit user feedback collected on the eBay e-commerce platform.

Machine Translation, Translation

Bandit Structured Prediction for Neural Sequence-to-Sequence Learning

1 code implementation ACL 2017 Julia Kreutzer, Artem Sokolov, Stefan Riezler

Bandit structured prediction describes a stochastic optimization framework where learning is performed from partial feedback.

Domain Adaptation, Machine Translation, +3

Stochastic Structured Prediction under Bandit Feedback

1 code implementation NeurIPS 2016 Artem Sokolov, Julia Kreutzer, Christopher Lo, Stefan Riezler

Stochastic structured prediction under bandit feedback follows a learning protocol where on each of a sequence of iterations, the learner receives an input, predicts an output structure, and receives partial feedback in form of a task loss evaluation of the predicted structure.

Structured Prediction
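The protocol in the abstract (receive an input, predict an output structure, observe only a scalar task loss for that structure) admits a score-function gradient estimator. A toy sketch of one such update, on a hypothetical bit-vector "structure" with Hamming loss rather than the paper's actual objectives:

```python
import math
import random

def bandit_step(theta, loss_fn, rng, lr=0.5):
    """One round of the protocol: sample a structure from the current
    stochastic policy, observe only its scalar task loss, and take a
    score-function (REINFORCE-style) gradient step."""
    probs = [1.0 / (1.0 + math.exp(-t)) for t in theta]   # Bernoulli per bit
    y = [1 if rng.random() < p else 0 for p in probs]
    loss = loss_fn(y)                     # partial feedback: a single number
    # d/d theta_i of log p(y) is (y_i - p_i); descend on loss * score
    theta = [t - lr * loss * (yi - pi)
             for t, yi, pi in zip(theta, y, probs)]
    return theta, loss

rng = random.Random(0)
reference = [1, 0, 1, 1]                  # hidden "gold" structure
loss_fn = lambda y: sum(a != b for a, b in zip(y, reference)) / len(reference)

theta = [0.0] * len(reference)
for _ in range(3000):
    theta, _ = bandit_step(theta, loss_fn, rng)
learned = [1 if t > 0 else 0 for t in theta]
```

The learner never sees the reference, only per-sample losses, yet the policy drifts toward the structure with the lowest expected loss; the paper studies exactly this kind of update for real structured outputs such as translations.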
