Translation

3145 papers with code • 7 benchmarks • 15 datasets

Translation is the task of converting an input from a source domain to a target domain: most commonly text from one natural language to another (machine translation), and, in the image-to-image setting, images from one visual domain to another.

Libraries

Use these libraries to find Translation models and implementations
See all 25 libraries.

Most implemented papers

Attention Is All You Need

tensorflow/tensor2tensor NeurIPS 2017

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.
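
The paper replaces that recurrence with self-attention. Below is a minimal NumPy sketch of the scaled dot-product attention it builds on; the shapes and variable names are illustrative, not the paper's notation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values

# Toy example: 3 query positions attending over 5 key/value positions.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```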

Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

junyanz/pytorch-CycleGAN-and-pix2pix ICCV 2017

Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image; whereas earlier approaches rely on a training set of aligned image pairs, this paper tackles the setting in which paired training data are not available.
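
The "cycle-consistent" idea in the title can be written as an L1 reconstruction penalty: translating X→Y→X should recover the input. A hypothetical sketch, with `G` and `F` standing in for the two learned generators:

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L_cyc = E||F(G(x)) - x||_1 + E||G(F(y)) - y||_1.
    G maps domain X -> Y and F maps Y -> X; both are stand-in callables."""
    forward = np.abs(F(G(x)) - x).mean()   # x -> G(x) -> F(G(x)) should recover x
    backward = np.abs(G(F(y)) - y).mean()  # y -> F(y) -> G(F(y)) should recover y
    return forward + backward

# Toy check with identity "generators": the loss is exactly zero.
x = np.random.rand(4, 3, 8, 8)  # batch of images from domain X
y = np.random.rand(4, 3, 8, 8)  # batch of images from domain Y
print(cycle_consistency_loss(lambda t: t, lambda t: t, x, y))  # 0.0
```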

Image-to-Image Translation with Conditional Adversarial Networks

phillipi/pix2pix CVPR 2017

We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems.
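
The paper's generator objective combines a conditional GAN term with an L1 distance to the ground-truth output. A rough NumPy sketch; `D`, `G`, and the λ = 100 weighting (`lam`) stand in for the learned networks and the paper's hyperparameter:

```python
import numpy as np

def pix2pix_generator_loss(D, G, x, y, lam=100.0):
    """Generator objective: fool D(x, G(x)) plus lam * L1 distance to y.
    D and G are stand-in callables here; D returns probabilities in (0, 1)."""
    fake = G(x)
    adv = -np.log(D(x, fake) + 1e-8).mean()  # non-saturating adversarial term
    l1 = np.abs(fake - y).mean()             # L1 pulls outputs toward ground truth
    return adv + lam * l1

# Toy check with placeholder networks.
x = np.random.rand(2, 3, 16, 16)                 # input images
y = np.random.rand(2, 3, 16, 16)                 # target images
G = lambda inp: inp                              # identity "generator"
D = lambda inp, out: np.full(inp.shape[0], 0.5)  # maximally uncertain "discriminator"
print(pix2pix_generator_loss(D, G, x, y))
```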

Neural Machine Translation by Jointly Learning to Align and Translate

graykode/nlp-tutorial 1 Sep 2014

Neural machine translation is a recently proposed approach to machine translation.
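
The alignment mechanism named in the title scores each encoder state against the current decoder state with a small feed-forward network, then forms a context vector as a softmax-weighted sum. A toy NumPy sketch with made-up parameter names:

```python
import numpy as np

def additive_attention(s, H, Wa, Ua, va):
    """Bahdanau-style alignment: score(s, h_j) = v^T tanh(W s + U h_j).
    s: decoder state (d,); H: encoder states (T, d); Wa, Ua, va: toy params."""
    scores = np.tanh(s @ Wa + H @ Ua) @ va  # (T,) alignment energies
    a = np.exp(scores - scores.max())
    a /= a.sum()                            # softmax -> attention weights
    return a @ H, a                         # context vector and weights

rng = np.random.default_rng(1)
d = 16
s, H = rng.normal(size=d), rng.normal(size=(7, d))
Wa, Ua, va = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
context, weights = additive_attention(s, H, Wa, Ua, va)
print(context.shape, weights.sum())  # (16,) 1.0
```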

Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning 10 Feb 2015

Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images.

Show and Tell: A Neural Image Caption Generator

karpathy/neuraltalk CVPR 2015

Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions.

YOLACT: Real-time Instance Segmentation

dbolya/yolact ICCV 2019

Then we produce instance masks by linearly combining the prototypes with the mask coefficients.
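
That sentence is the whole assembly step: each detected instance carries a vector of mask coefficients, and its mask is a sigmoid of a linear combination of shared prototype masks. A small NumPy sketch of that combination:

```python
import numpy as np

def assemble_masks(prototypes, coefficients):
    """Linearly combine k prototype masks per instance, then apply a sigmoid.
    prototypes: (k, H, W); coefficients: (n_instances, k)."""
    lin = np.tensordot(coefficients, prototypes, axes=([1], [0]))  # (n, H, W)
    return 1.0 / (1.0 + np.exp(-lin))                              # soft masks

rng = np.random.default_rng(2)
protos = rng.normal(size=(4, 32, 32))        # k = 4 prototype masks
coeffs = rng.normal(size=(3, 4))             # 3 detected instances
print(assemble_masks(protos, coeffs).shape)  # (3, 32, 32)
```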

R-FCN: Object Detection via Region-based Fully Convolutional Networks

daijifeng001/r-fcn NeurIPS 2016

In contrast to previous region-based detectors such as Fast/Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image.
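
A rough sketch of the position-sensitive pooling that makes this sharing possible: each spatial bin of a region of interest reads from its own dedicated score map, so per-region work reduces to cheap pooling. The single-class simplification and shapes below are this illustration's assumptions, not the paper's exact setup:

```python
import numpy as np

def position_sensitive_roi_pool(score_maps, roi, k=3):
    """Pool one class score from k*k position-sensitive maps.
    score_maps: (k*k, H, W), one map per spatial bin; roi = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = roi
    bw, bh = (x1 - x0) / k, (y1 - y0) / k
    votes = np.empty((k, k))
    for i in range(k):       # vertical bins
        for j in range(k):   # horizontal bins
            y_lo, y_hi = int(y0 + i * bh), int(y0 + (i + 1) * bh)
            x_lo, x_hi = int(x0 + j * bw), int(x0 + (j + 1) * bw)
            region = score_maps[i * k + j,
                                y_lo:max(y_hi, y_lo + 1),
                                x_lo:max(x_hi, x_lo + 1)]
            votes[i, j] = region.mean()  # bin (i, j) votes from its own map
    return votes.mean()                  # average vote -> per-RoI class score

maps = np.random.rand(9, 64, 64)  # k*k = 9 shared, fully convolutional maps
print(position_sensitive_roi_pool(maps, (10, 10, 40, 40)))
```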

Effective Approaches to Attention-based Neural Machine Translation

philipperemy/keras-attention-mechanism EMNLP 2015

Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English-to-German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.
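
The "different attention architectures" refer to the paper's three scoring functions (dot, general, concat). A toy NumPy sketch with illustrative parameter shapes:

```python
import numpy as np

def luong_score(s, h, mode="dot", W=None, v=None):
    """The three scoring functions compared in the paper:
    dot: s.h   general: s.W.h   concat: v.tanh(W.[s; h])   (toy parameters)."""
    if mode == "dot":
        return s @ h
    if mode == "general":
        return s @ W @ h
    if mode == "concat":
        return v @ np.tanh(W @ np.concatenate([s, h]))
    raise ValueError(mode)

rng = np.random.default_rng(3)
d = 8
s, h = rng.normal(size=d), rng.normal(size=d)  # decoder and encoder states
print(luong_score(s, h, "dot"))
print(luong_score(s, h, "general", W=rng.normal(size=(d, d))))
print(luong_score(s, h, "concat", W=rng.normal(size=(d, 2 * d)), v=rng.normal(size=d)))
```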

Regularizing and Optimizing LSTM Language Models

salesforce/awd-lstm-lm ICLR 2018

Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering.
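
The paper's central regularizer is DropConnect applied to the LSTM's hidden-to-hidden weights. A minimal NumPy sketch of that weight-drop step; the inverted-dropout scaling is an assumption of this illustration:

```python
import numpy as np

def weight_drop(W_hh, p=0.5, rng=None):
    """DropConnect on the recurrent weight matrix: zero a random subset of
    hidden-to-hidden weights once per forward pass."""
    rng = rng or np.random.default_rng()
    mask = rng.random(W_hh.shape) >= p   # keep each weight with prob 1 - p
    return W_hh * mask / (1.0 - p)       # inverted-dropout rescaling

W = np.ones((4, 4))  # toy hidden-to-hidden matrix
print(weight_drop(W, p=0.5, rng=np.random.default_rng(4)))
```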