Search Results for author: Jean Senellart

Found 31 papers, 4 papers with code

Lexical Micro-adaptation for Neural Machine Translation

no code implementations • EMNLP (IWSLT) 2019 • Jitao Xu, Josep Crego, Jean Senellart

This work is inspired by a typical machine translation industry scenario in which translators make use of in-domain data for facilitating translation of similar or repeating sentences.

Machine Translation • NMT • +1

Robust Translation of French Live Speech Transcripts

no code implementations • AMTA 2022 • Elise Bertin-Lemée, Guillaume Klein, Josep Crego, Jean Senellart

Despite a narrowed performance gap with direct approaches, cascade solutions, involving automatic speech recognition (ASR) and machine translation (MT), are still largely employed in speech translation (ST).

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +4

Priming Neural Machine Translation

no code implementations • WMT (EMNLP) 2020 • Minh Quang Pham, Jitao Xu, Josep Crego, François Yvon, Jean Senellart

Priming is a well-known and studied phenomenon in psychology, based on the prior presentation of one stimulus (cue) to influence the processing of a response.

Machine Translation • NMT • +1

Integrating Domain Terminology into Neural Machine Translation

no code implementations • COLING 2020 • Elise Michon, Josep Crego, Jean Senellart

This paper extends existing work on terminology integration into Neural Machine Translation, a common industrial practice to dynamically adapt translation to a specific domain.

Machine Translation • Translation

Boosting Neural Machine Translation with Similar Translations

no code implementations • AMTA 2022 • Jitao Xu, Josep Crego, Jean Senellart

This paper explores data augmentation methods for training Neural Machine Translation to make use of similar translations, in a way comparable to how a human translator employs fuzzy matches.

Data Augmentation • Machine Translation • +2

SYSTRAN @ WNGT 2019: DGT Task

no code implementations • WS 2019 • Li Gong, Josep Crego, Jean Senellart

This paper describes SYSTRAN's participation in the Document-level Generation and Translation (DGT) Shared Task of the 3rd Workshop on Neural Generation and Translation (WNGT 2019).

Translation

Enhanced Transformer Model for Data-to-Text Generation

no code implementations • WS 2019 • Li Gong, Josep Crego, Jean Senellart

Neural models have recently shown significant progress on data-to-text generation tasks in which descriptive texts are generated conditioned on database records.

Data Augmentation • Data-to-Text Generation • +1

SYSTRAN Participation to the WMT2018 Shared Task on Parallel Corpus Filtering

no code implementations • WS 2018 • Minh Quang Pham, Josep Crego, Jean Senellart

This paper describes the participation of SYSTRAN in the shared task on parallel corpus filtering at the Third Conference on Machine Translation (WMT 2018).

Feature Engineering • Machine Translation • +3

OpenNMT: Open-source Toolkit for Neural Machine Translation

no code implementations • 12 Sep 2017 • Guillaume Klein, Yoon Kim, Yuntian Deng, Josep Crego, Jean Senellart, Alexander M. Rush

We introduce an open-source toolkit for neural machine translation (NMT) to support research into model architectures, feature representations, and source modalities, while maintaining competitive performance, modularity and reasonable training requirements.

Machine Translation • NMT • +1
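The excerpt above describes the toolkit only at a high level. As a concrete illustration, here is a minimal sketch of an end-to-end run with OpenNMT-py, the actively maintained Python distribution of the toolkit (the 2017 paper originally accompanied the Lua/Torch implementation). All file paths, vocabulary names, and step counts below are hypothetical placeholders, not values from the paper.

```python
# Minimal OpenNMT-py sketch: build vocab, train, translate.
# Assumes `pip install OpenNMT-py` and that the data/ files referenced
# below exist; every path and hyper-parameter here is a placeholder.
import subprocess
from pathlib import Path

config = """\
save_data: run/example
src_vocab: run/example.vocab.src
tgt_vocab: run/example.vocab.tgt
data:
    corpus_1:
        path_src: data/train.src
        path_tgt: data/train.tgt
    valid:
        path_src: data/valid.src
        path_tgt: data/valid.tgt
save_model: run/model
train_steps: 10000
valid_steps: 1000
"""

Path("run").mkdir(exist_ok=True)
Path("config.yaml").write_text(config)

# 1) Build source/target vocabularies from a sample of the training corpus.
subprocess.run(
    ["onmt_build_vocab", "-config", "config.yaml", "-n_sample", "10000"],
    check=True,
)

# 2) Train with default model settings (architecture options would be
#    added to the same YAML config).
subprocess.run(["onmt_train", "-config", "config.yaml"], check=True)

# 3) Translate a held-out file with the last saved checkpoint.
subprocess.run(
    ["onmt_translate",
     "-model", "run/model_step_10000.pt",
     "-src", "data/test.src",
     "-output", "pred.txt"],
    check=True,
)
```

The toolkit is driven by a YAML configuration plus a small set of console entry points, which is why this sketch shells out to them rather than relying on internal Python APIs.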

SYSTRAN Purely Neural MT Engines for WMT2017

no code implementations • WS 2017 • Yongchao Deng, Jungi Kim, Guillaume Klein, Catherine Kobus, Natalia Segal, Christophe Servan, Bo Wang, Dakun Zhang, Josep Crego, Jean Senellart

This paper describes SYSTRAN's systems submitted to the WMT 2017 shared news translation task for English-German, in both translation directions.

Machine Translation • Translation

Neural Machine Translation from Simplified Translations

no code implementations • 19 Dec 2016 • Josep Crego, Jean Senellart

We conduct preliminary experiments showing that translation complexity is actually reduced in a translation of the source side of a bi-text, compared to the target reference of the bi-text, when using a neural machine translation (NMT) system learned on the exact same bi-text.

Knowledge Distillation • Machine Translation • +3

Domain Control for Neural Machine Translation

no code implementations • RANLP 2017 • Catherine Kobus, Josep Crego, Jean Senellart

The presented approach shows quality improvements when compared to dedicated per-domain models, when translating on any of the covered domains and even on out-of-domain data.

Domain Adaptation • Machine Translation • +3

SYSTRAN's Pure Neural Machine Translation Systems

no code implementations • 18 Oct 2016 • Josep Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, Peter Zoldan

Since the first online demonstration of Neural Machine Translation (NMT) by LISA, NMT development has recently moved from laboratory to production systems as demonstrated by several entities announcing roll-out of NMT engines to replace their existing technologies.

Machine Translation • NMT • +1
