1114 papers with code • 55 benchmarks • 50 datasets
Machine translation is the task of translating a sentence in a source language into a different target language.
(Image credit: Google seq2seq)
This paper describes Facebook FAIR's submission to the WMT19 shared news translation task.
Ranked #1 on Machine Translation on WMT2019 English-German
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.
Ranked #1 on Language Modelling on enwik8 (using extra training data)
On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU.
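The BLEU scores quoted above measure n-gram overlap between a system translation and a reference. The following is a minimal single-reference sketch (clipped n-gram precisions combined by a geometric mean, times a brevity penalty); it is an illustration, not the exact WMT evaluation script, which additionally fixes tokenization and handles multiple references.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counts of all contiguous n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified single-reference BLEU on pre-tokenized input:
    geometric mean of clipped n-gram precisions times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(candidate, n)
        ref_counts = ngrams(reference, n)
        total = sum(cand_counts.values())
        if total == 0:
            return 0.0  # candidate shorter than n tokens
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # no smoothing in this sketch
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return 100.0 * bp * geo_mean

cand = "the cat sat on the mat".split()
print(bleu(cand, cand))  # identical sentences score 100.0
```

Real evaluations report corpus-level BLEU (n-gram counts pooled over all sentences before computing precisions), which this sentence-level sketch omits.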
Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs.
Ranked #2 on Machine Translation on WMT2016 English-Russian
We achieve this by decomposing the text-editing task into two sub-tasks: tagging, which decides on the subset of input tokens to keep and their order in the output text, and insertion, which in-fills the tokens of the output that are not present in the input.
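The tag-then-insert decomposition can be illustrated with a toy realization step. The tag set and helper below are hypothetical stand-ins: in the paper both sub-tasks are learned models, whereas here the tags and insertions are supplied by hand just to show how the two outputs combine into the edited text.

```python
def realize(source_tokens, tags, insertions):
    """Combine the two sub-task outputs into the final text.

    tags[i] is 'KEEP' or 'DELETE' for source_tokens[i];
    insertions maps a source position i to tokens in-filled after it.
    (Toy scheme for illustration, not the paper's exact tag vocabulary.)
    """
    out = []
    for i, tok in enumerate(source_tokens):
        if tags[i] == 'KEEP':
            out.append(tok)
        out.extend(insertions.get(i, []))  # insertion sub-task output
    return out

src = ['the', 'film', 'was', 'very', 'good']
tags = ['KEEP', 'KEEP', 'KEEP', 'DELETE', 'KEEP']  # tagging sub-task output
print(' '.join(realize(src, tags, {4: ['indeed']})))  # the film was good indeed
```

Because most output tokens are copied from the input, the tagger only has to predict a small edit, which is what makes such editing models faster than full sequence-to-sequence generation.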
As a result, these conventional methods are less effective than methods that leverage the structures, such as SpatialDropout and DropBlock, which randomly drop the values in certain contiguous areas of the hidden states, setting them to zero.
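The structured-dropout idea can be sketched in a few lines. The function below implements a SpatialDropout-style variant that zeroes whole feature channels (contiguous rows of a 2-D hidden state) rather than individual values; the function name and the inverted-dropout scaling are standard conventions, not code from the paper.

```python
import random

def spatial_dropout(hidden, p, seed=None):
    """Drop entire feature channels (rows) of a 2-D hidden state.

    Instead of zeroing individual values independently, each channel is
    zeroed as a contiguous block with probability p; kept channels are
    scaled by 1/(1-p) (inverted dropout) so the expected value is unchanged.
    """
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - p)
    out = []
    for channel in hidden:
        if rng.random() < p:
            out.append([0.0] * len(channel))          # whole channel dropped
        else:
            out.append([v * scale for v in channel])  # channel kept, rescaled
    return out

h = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
dropped = spatial_dropout(h, p=0.5, seed=0)  # each row is all-zero or 2x original
```

DropBlock applies the same principle to 2-D feature maps, zeroing square spatial regions instead of channels; the key point in both cases is that correlated neighboring activations are removed together.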
Ranked #1 on Image Classification on CIFAR-10, 4000