1155 papers with code • 55 benchmarks • 51 datasets
Machine translation is the task of translating a sentence in a source language to a different target language.
(Image credit: Google seq2seq)
We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.
Ranked #1 on CCG Supertagging on CCGbank
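As a rough illustration of the CVT objective, the sketch below trains a primary classifier on labeled data while auxiliary classifiers, which see only one LSTM direction, are trained to match the primary module's predictions on unlabeled data. This assumes PyTorch; `CVTTagger`, `cvt_losses`, and all dimensions are hypothetical names for illustration, not the authors' code.

```python
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch of Cross-View Training (not the paper's code):
# a BiLSTM encoder, a primary classifier over the full bidirectional
# states, and auxiliary classifiers over restricted views.
class CVTTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden, n_tags):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.primary = nn.Linear(2 * hidden, n_tags)  # sees both directions
        self.aux_fwd = nn.Linear(hidden, n_tags)      # forward states only
        self.aux_bwd = nn.Linear(hidden, n_tags)      # backward states only

    def forward(self, tokens):
        h, _ = self.bilstm(self.emb(tokens))          # (B, T, 2H)
        half = h.size(-1) // 2
        fwd, bwd = h[..., :half], h[..., half:]
        return self.primary(h), self.aux_fwd(fwd), self.aux_bwd(bwd)

def cvt_losses(model, labeled, labels, unlabeled):
    # Supervised loss on labeled data uses the primary module only.
    primary, _, _ = model(labeled)
    sup = F.cross_entropy(primary.transpose(1, 2), labels)
    # On unlabeled data, auxiliary modules are trained to match the
    # primary module's predictions, which are treated as fixed targets.
    primary_u, aux_f, aux_b = model(unlabeled)
    target = primary_u.softmax(-1).detach()
    consistency = sum(
        F.kl_div(a.log_softmax(-1), target, reduction="batchmean")
        for a in (aux_f, aux_b)
    )
    return sup + consistency
```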
Several mechanisms to focus attention of a neural network on selected parts of its input or memory have been used successfully in deep learning models in recent years.
Ranked #47 on Machine Translation on WMT2014 English-French
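A minimal sketch of one such mechanism, content-based soft attention over a memory, assuming NumPy; the function name and shapes are illustrative:

```python
import numpy as np

# Soft attention: score each memory slot against the query, normalize
# the scores with a softmax, and return the weighted memory summary.
def soft_attention(query, memory):
    """query: (d,), memory: (n, d) -> weighted summary of memory, (d,)."""
    scores = memory @ query                   # (n,) similarity scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over memory slots
    return weights @ memory                   # attention readout

rng = np.random.default_rng(0)
context = soft_attention(rng.normal(size=8), rng.normal(size=(5, 8)))
```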
Dictionaries and phrase tables are the basis of modern statistical machine translation systems.
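In the spirit of this line of work, a translation lexicon can be induced from monolingual word embeddings by fitting a linear map between the two vector spaces on a small seed dictionary. The sketch below uses synthetic toy vectors and NumPy and is illustrative, not the paper's released method:

```python
import numpy as np

# Fit W minimizing ||X W - Z||^2 over seed dictionary pairs, then
# translate a source word by nearest neighbor in the target space.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))   # source-language word vectors (toy data)
Z = rng.normal(size=(1000, 50))   # target vectors of their translations
W, *_ = np.linalg.lstsq(X, Z, rcond=None)   # closed-form least squares

def translate(src_vec, target_vocab_vecs):
    mapped = src_vec @ W
    sims = target_vocab_vecs @ mapped / (
        np.linalg.norm(target_vocab_vecs, axis=1) * np.linalg.norm(mapped)
    )
    return int(sims.argmax())     # index of the closest target word
```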
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.
Ranked #1 on Machine Translation on IWSLT2015 English-German
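The attention-only alternative proposed by this paper, the Transformer, is built around scaled dot-product attention, which fits in a few lines. A minimal NumPy sketch, with illustrative names and shapes:

```python
import numpy as np

# Scaled dot-product attention: similarity of queries to keys, scaled by
# sqrt(d), softmax over keys, then a weighted sum of the values.
def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> (n_q, d_v)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])           # (n_q, n_k)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)         # row-wise softmax
    return weights @ V
```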
Existing work in translation has demonstrated the potential of massively multilingual machine translation by training a single model able to translate between any pair of languages.
We introduce fairseq S2T, a fairseq extension for speech-to-text (S2T) modeling tasks such as end-to-end speech recognition and speech-to-text translation.
Ranked #4 on Speech-to-Text Translation on MuST-C EN->DE
Recent work demonstrates the potential of multilingual pretraining to create one model that can be used for various tasks in different languages.
We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.
Ranked #3 on Text Summarization on X-Sum
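The two noising transforms described above are straightforward to sketch. The following toy illustration is not the released BART code; the `MASK` token, the masking probability, and the Poisson span-length mean are assumptions made for the example:

```python
import numpy as np

MASK = "<mask>"

def shuffle_sentences(sentences, rng):
    # Randomly permute the order of the original sentences.
    idx = rng.permutation(len(sentences))
    return [sentences[i] for i in idx]

def infill(tokens, rng, mask_prob=0.15, mean_span=3):
    # Text infilling: replace sampled spans of tokens with a single
    # mask token, with span lengths drawn from a Poisson distribution.
    out, i = [], 0
    while i < len(tokens):
        if rng.random() < mask_prob:
            span = max(1, rng.poisson(mean_span))
            out.append(MASK)          # whole span -> one mask token
            i += span
        else:
            out.append(tokens[i])
            i += 1
    return out

rng = np.random.default_rng(0)
print(infill("the quick brown fox jumps over the lazy dog".split(), rng))
```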