Translation

3,182 papers with code • 7 benchmarks • 15 datasets

Translation is the task of converting a sequence from one language or domain to another, most commonly text-to-text machine translation; the papers below also include image-to-image translation, which maps images between visual domains.

Libraries

Use these libraries to find Translation models and implementations
See all 26 libraries.

Most implemented papers

Spatial Transformer Networks

tensorpack/tensorpack NeurIPS 2015

Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner.
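The differentiable core of a spatial transformer is simple: an affine matrix maps each output-grid coordinate (in normalized [-1, 1] space) back into the input, and bilinear interpolation samples the input there. A pure-Python sketch of that sampling step, assuming an image larger than 1×1 (function names are illustrative, not from the paper's code):

```python
import math

def bilinear_sample(img, x, y):
    """Bilinearly interpolate img (a list of rows) at continuous pixel coords (x, y).
    Out-of-bounds corners contribute zero."""
    h, w = len(img), len(img[0])
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    val = 0.0
    for xi, yi in [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]:
        if 0 <= xi < w and 0 <= yi < h:
            val += (1 - abs(x - xi)) * (1 - abs(y - yi)) * img[yi][xi]
    return val

def spatial_transform(img, theta):
    """Warp img by the 2x3 affine matrix theta, expressed in normalized
    [-1, 1] coordinates (assumes height and width > 1)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # normalized target-grid coordinates
            xt, yt = 2 * j / (w - 1) - 1, 2 * i / (h - 1) - 1
            # map back into the source image via theta
            xs = theta[0][0] * xt + theta[0][1] * yt + theta[0][2]
            ys = theta[1][0] * xt + theta[1][1] * yt + theta[1][2]
            # convert normalized source coords to pixel coords and sample
            out[i][j] = bilinear_sample(img, (xs + 1) * (w - 1) / 2,
                                             (ys + 1) * (h - 1) / 2)
    return out

# Identity transform leaves the image unchanged.
identity = [[1, 0, 0], [0, 1, 0]]
warped = spatial_transform([[1, 2], [3, 4]], identity)
```

Because both the affine map and the bilinear weights are differentiable, gradients flow back to the network that predicts theta, which is what makes the module trainable end-to-end.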

Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

graykode/nlp-tutorial 3 Jun 2014

In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNNs).

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

huggingface/transformers ACL 2020

We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.
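The in-filling corruption replaces each sampled span of tokens with a single mask token (span lengths are drawn from a Poisson(λ=3) distribution in the paper; a length-0 span inserts a mask). A minimal sketch of the replacement step with the spans given explicitly (the function name and mask string are illustrative, not the library's API):

```python
def infill(tokens, spans, mask="<mask>"):
    """Replace each (start, length) span with a single mask token.
    Spans must be sorted and non-overlapping; length 0 inserts a mask."""
    out, i = [], 0
    for start, length in spans:
        out.extend(tokens[i:start])   # copy tokens up to the span
        out.append(mask)              # one mask, regardless of span length
        i = start + length            # skip the masked-out tokens
    out.extend(tokens[i:])
    return out

# Masking a 2-token span teaches the model to predict how many tokens are missing.
corrupted = infill(["the", "cat", "sat", "down"], [(1, 2)])
```

Collapsing a whole span to one token is what distinguishes this scheme from BERT-style masking: the decoder must predict both the content and the length of the missing span.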

Convolutional Sequence to Sequence Learning

facebookresearch/fairseq ICML 2017

The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks.

StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation

eriklindernoren/PyTorch-GAN CVPR 2018

To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model.

An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling

locuslab/TCN 4 Mar 2018

Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory.
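The longer effective memory comes from dilated causal convolutions: each output depends only on current and past inputs, and with dilations doubling per layer the receptive field grows exponentially with depth. A small pure-Python sketch (names are illustrative):

```python
def causal_conv1d(x, w, dilation=1):
    """Dilated causal 1-D convolution via left zero-padding:
    output[t] depends only on x[t], x[t-d], x[t-2d], ..."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = [0.0] * pad + list(x)
    return [sum(w[j] * xp[t + j * dilation] for j in range(k))
            for t in range(len(x))]

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated causal conv layers:
    each layer adds (kernel_size - 1) * dilation time steps."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Four layers of kernel size 3 with doubling dilation already see 31 steps back.
rf = receptive_field(3, [1, 2, 4, 8])
```

Doubling the dilation per layer means the memory horizon scales as roughly 2^depth, whereas an undilated stack would only grow linearly.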

Autoencoding beyond pixels using a learned similarity metric

clementchadebec/benchmark_VAE 31 Dec 2015

We present an autoencoder that leverages learned representations to better measure similarities in data space.

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation

NVIDIA/DeepLearningExamples 26 Sep 2016

To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder.
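The connectivity described here is: the first decoder layer's state is scored against every top-layer encoder output, and the softmax-weighted sum becomes the context vector. GNMT itself scores with a small feed-forward (additive) network; the sketch below substitutes a simpler dot-product score purely to show the mechanics:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attention_context(decoder_state, encoder_outputs):
    """Score each top-layer encoder output against the bottom-layer decoder
    state, normalize the scores, and return (context vector, weights)."""
    scores = [sum(d * e for d, e in zip(decoder_state, enc))
              for enc in encoder_outputs]
    weights = softmax(scores)
    dim = len(decoder_state)
    context = [sum(w * enc[i] for w, enc in zip(weights, encoder_outputs))
               for i in range(dim)]
    return context, weights
```

Because only the bottom decoder layer waits on attention, the remaining decoder layers can start computing as soon as that layer finishes, which is the parallelism win the abstract refers to.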

U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation

taki0112/UGATIT ICLR 2020

We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner.

Neural Machine Translation of Rare Words with Subword Units

rsennrich/subword-nmt ACL 2016

Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem.
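The paper's fix is byte-pair encoding over characters: starting from a character-level vocabulary, repeatedly merge the most frequent adjacent symbol pair, so rare words decompose into known subword units. A compact sketch of the learning loop, a simplified version of what rsennrich/subword-nmt implements (word keys hold space-separated symbols):

```python
import collections

def get_pair_counts(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Rewrite the vocabulary with every occurrence of the pair fused."""
    a, b = pair
    return {word.replace(f"{a} {b}", f"{a}{b}"): freq
            for word, freq in vocab.items()}

def learn_bpe(vocab, num_merges):
    """Greedily learn num_merges merge operations from a frequency dict."""
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges, vocab

# Two merges on a toy corpus: "l o" fuses first, then "lo w" yields "low".
merges, segmented = learn_bpe({"l o w": 5, "l o w e r": 2}, 2)
```

At translation time the learned merge list is replayed on new words in the same order, so even a word never seen in training segments deterministically into in-vocabulary units.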