Search Results for author: Raphael Rubino

Found 30 papers, 2 papers with code

Combination of Neural Machine Translation Systems at WMT20

no code implementations WMT (EMNLP) 2020 Benjamin Marie, Raphael Rubino, Atsushi Fujita

This paper presents neural machine translation systems and their combination built for the WMT20 English→Polish and Japanese→English translation tasks.

Machine Translation NMT +2

Fusion of Self-supervised Learned Models for MOS Prediction

no code implementations 11 Apr 2022 Zhengdong Yang, Wangjin Zhou, Chenhui Chu, Sheng Li, Raj Dabre, Raphael Rubino, Yi Zhao

This challenge aims to predict MOS scores of synthetic speech on two tracks: the main track and a more challenging out-of-domain (OOD) sub-track.

Scientific Credibility of Machine Translation Research: A Meta-Evaluation of 769 Papers

2 code implementations ACL 2021 Benjamin Marie, Atsushi Fujita, Raphael Rubino

MT evaluations in recent papers tend to copy and compare automatic metric scores from previous work to claim the superiority of a method or an algorithm, without confirming either that exactly the same training, validation, and test data were used or that the metric scores are comparable.

Machine Translation Translation

Intermediate Self-supervised Learning for Machine Translation Quality Estimation

no code implementations COLING 2020 Raphael Rubino, Eiichiro Sumita

The proposed method does not rely on annotated data and is complementary to QE methods involving pre-trained sentence encoders and domain adaptation.

Domain Adaptation Language Modelling +3

Tagged Back-translation Revisited: Why Does It Really Work?

no code implementations ACL 2020 Benjamin Marie, Raphael Rubino, Atsushi Fujita

In this paper, we show that neural machine translation (NMT) systems trained on large back-translated data overfit some of the characteristics of machine-translated texts.

Machine Translation NMT +2

Balancing Cost and Benefit with Tied-Multi Transformers

no code implementations WS 2020 Raj Dabre, Raphael Rubino, Atsushi Fujita

We propose and evaluate a novel procedure for training multiple Transformers with tied parameters, which compresses multiple models into one and enables dynamically choosing the number of encoder and decoder layers during decoding.

Knowledge Distillation Machine Translation +2

INFODENS: An Open-source Framework for Learning Text Representations

1 code implementation 16 Oct 2018 Ahmad Taie, Raphael Rubino, Josef van Genabith

The advent of representation learning methods enabled large performance gains on various language tasks, alleviating the need for manual feature engineering.

Feature Engineering General Classification +3

DFKI-MLT System Description for the WMT18 Automatic Post-editing Task

no code implementations WS 2018 Daria Pylypenko, Raphael Rubino

This paper presents the Automatic Post-editing (APE) systems submitted by the DFKI-MLT group to the WMT'18 APE shared task.

Automatic Post-Editing Translation +1

Findings of the WMT 2018 Shared Task on Automatic Post-Editing

no code implementations WS 2018 Rajen Chatterjee, Matteo Negri, Raphael Rubino, Marco Turchi

In the former subtask, characterized by original translations of lower quality, top results achieved impressive improvements, up to -6.24 TER and +9.53 BLEU points over the baseline "do-nothing" system.

Automatic Post-Editing NMT +1

Modeling Diachronic Change in Scientific Writing with Information Density

no code implementations COLING 2016 Raphael Rubino, Stefania Degaetano-Ortlieb, Elke Teich, Josef van Genabith

In this paper we investigate the introduction of information-theory-inspired features to study long-term diachronic change on three levels: lexis, part of speech, and syntax.

General Classification Informativeness
