1159 papers with code • 55 benchmarks • 52 datasets
Machine translation is the task of translating a sentence from a source language into a different target language.
(Image credit: Google seq2seq)
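To make the task definition concrete, here is a minimal usage sketch. It assumes the Hugging Face `transformers` library and the publicly available Helsinki-NLP/opus-mt-en-de checkpoint as an illustrative model choice; neither is prescribed by this page.

```python
# Minimal sketch: translate an English sentence into German with a
# pretrained sequence-to-sequence model (illustrative model choice only).
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
result = translator("Machine translation maps a sentence from a source "
                    "language into a target language.")
print(result[0]["translation_text"])
```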
Pre-training models on vast quantities of unlabeled data has emerged as an effective approach to improving accuracy on many NLP tasks.
Ranked #1 on Machine Translation on WMT2016 Romanian-English (using extra training data)
Recent studies have demonstrated the cross-lingual alignment ability of multilingual pretrained language models.
Representing text at the level of bytes and using the set of 256 possible byte values as the vocabulary is a potential solution to this issue.
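A minimal sketch of what such a byte-level vocabulary looks like in practice, assuming plain UTF-8 encoding rather than any particular paper's tokenizer:

```python
# Byte-level "tokenization": every string maps to ids in 0..255, so the
# vocabulary has exactly 256 symbols regardless of script or language.
def encode(text: str) -> list[int]:
    return list(text.encode("utf-8"))   # each byte becomes one token id

def decode(ids: list[int]) -> str:
    return bytes(ids).decode("utf-8")   # invertible for valid UTF-8

ids = encode("привет, world")           # non-Latin text still fits in 256 ids
assert max(ids) < 256
assert decode(ids) == "привет, world"
```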
Quality Estimation (QE) is an important component in making Machine Translation (MT) useful in real-world applications, as it aims to inform the user about the quality of the MT output at test time.
Overparameterized transformer networks have obtained state-of-the-art results in various natural language processing tasks, such as machine translation, language modeling, and question answering.
The state of the art in machine translation (MT) is governed by neural approaches, which typically provide superior translation accuracy compared with statistical approaches.
Previous work on neural noisy channel modeling relied on latent variable models that incrementally process the source and target sentence.
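For context, the noisy-channel formulation scores a candidate translation y for a source sentence x with Bayes' rule, p(y|x) ∝ p(x|y)·p(y). The sketch below is a generic reranking illustration under that decomposition, not the incremental latent-variable model the snippet refers to; the weights and example scores are hypothetical.

```python
# Generic noisy-channel rescoring sketch: combine a reverse ("channel")
# model log p(x|y) with a target-side language model log p(y).
def noisy_channel_score(log_p_src_given_tgt: float,
                        log_p_tgt: float,
                        channel_weight: float = 1.0,
                        lm_weight: float = 1.0) -> float:
    return channel_weight * log_p_src_given_tgt + lm_weight * log_p_tgt

# Example: pick the higher-scoring of two candidate translations
# (the log-probabilities are made-up numbers for illustration).
candidates = {
    "candidate_a": noisy_channel_score(-12.3, -20.1),
    "candidate_b": noisy_channel_score(-10.8, -22.5),
}
best = max(candidates, key=candidates.get)
print(best)
```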