1919 papers with code • 0 benchmarks • 0 datasets



Greatest papers with code

Attention Is All You Need

tensorflow/models NeurIPS 2017

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.

Ranked #2 on Multimodal Machine Translation on Multi30K (BLEU (DE-EN) metric)

Abstractive Text Summarization Constituency Parsing +2
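The core operation this paper introduces, scaled dot-product attention, can be sketched in a few lines of NumPy. The function name and toy shapes below are illustrative, not taken from the paper's released code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V                                    # (n_q, d_v) weighted values

# Toy example: 2 queries attending over 3 key/value pairs.
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
V = np.array([[1.0], [2.0], [3.0]])
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 1)
```

Each output row is a convex combination of the value rows, weighted by query-key similarity.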

Semi-Supervised Sequence Modeling with Cross-View Training

tensorflow/models EMNLP 2018

We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.

CCG Supertagging Dependency Parsing +6
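On unlabeled data, CVT trains auxiliary prediction modules that see restricted views of the input to match the full-view primary prediction. A minimal sketch of that consistency objective, assuming logits are already computed; the view construction and the Bi-LSTM encoder itself are placeholders here:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def log_softmax(z):
    m = z.max(axis=-1, keepdims=True)
    return (z - m) - np.log(np.exp(z - m).sum(axis=-1, keepdims=True))

def cvt_consistency_loss(primary_logits, aux_logits_per_view):
    """Cross-entropy from the primary (full-view) soft prediction to each
    auxiliary (restricted-view) prediction, averaged over views."""
    target = softmax(primary_logits)  # treated as a fixed soft target
    losses = [-(target * log_softmax(a)).sum(axis=-1).mean()
              for a in aux_logits_per_view]
    return float(np.mean(losses))

# Toy batch of 4 unlabeled examples over 3 classes, with 2 restricted views.
rng = np.random.default_rng(0)
primary = rng.normal(size=(4, 3))
aux_views = [rng.normal(size=(4, 3)) for _ in range(2)]
loss = cvt_consistency_loss(primary, aux_views)
print(loss > 0)
```

Minimizing this loss pushes each restricted-view module toward the full-view prediction, which in turn improves the shared encoder.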

Can Active Memory Replace Attention?

tensorflow/models NeurIPS 2016

Several mechanisms to focus attention of a neural network on selected parts of its input or memory have been used successfully in deep learning models in recent years.

Image Captioning Machine Translation +1

Exploiting Similarities among Languages for Machine Translation

tensorflow/models 17 Sep 2013

Dictionaries and phrase tables are the basis of modern statistical machine translation systems.

Machine Translation Translation

Show and Tell: Lessons learned from the 2015 MSCOCO Image Captioning Challenge

tensorflow/models 21 Sep 2016

Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing.

Image Captioning Translation

Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

tensorflow/models ICCV 2017

Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs.

Ranked #1 on Image-to-Image Translation on photo2vangogh (Fréchet Inception Distance metric)

Multimodal Unsupervised Image-To-Image Translation Style Transfer +2
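The cycle-consistency constraint at the heart of this method penalizes the L1 distance between an image and its round-trip translation, x → G(x) → F(G(x)). A toy sketch, with simple invertible functions standing in for the two learned generators:

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L_cyc = E[||F(G(x)) - x||_1] + E[||G(F(y)) - y||_1]."""
    forward = np.abs(F(G(x)) - x).mean()   # x -> G(x) -> F(G(x)) should recover x
    backward = np.abs(G(F(y)) - y).mean()  # y -> F(y) -> G(F(y)) should recover y
    return forward + backward

# Placeholder "generators": a perfect inverse pair gives zero loss.
G = lambda img: img + 1.0
F = lambda img: img - 1.0
x = np.zeros((2, 4, 4))
y = np.ones((2, 4, 4))
print(cycle_consistency_loss(x, y, G, F))  # 0.0
```

In the paper this term is combined with adversarial losses on both domains, so the mappings are both realistic and mutually consistent.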

mT5: A massively multilingual pre-trained text-to-text transformer

huggingface/transformers NAACL 2021

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks.

Common Sense Reasoning Natural Language Inference +3
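The unified text-to-text format referenced here casts every task as string-in, string-out prediction, with a task prefix on the input telling the model what to do. The pairs below are illustrative examples in the style of the T5 paper, not samples from the mT5 training data:

```python
# Every task becomes an (input string, target string) pair; only the
# prefix distinguishes translation from classification.
examples = [
    ("translate English to German: That is good.", "Das ist gut."),
    ("cola sentence: The course is jumping well.", "not acceptable"),
]
for source, target in examples:
    print(f"{source!r} -> {target!r}")
```

Because inputs and outputs are plain text, one model and one training objective cover all tasks, which is what lets mT5 scale the same recipe to 101 languages.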

Beyond English-Centric Multilingual Machine Translation

huggingface/transformers 21 Oct 2020

Existing work in translation demonstrated the potential of massively multilingual machine translation by training a single model able to translate between any pair of languages.

Machine Translation Translation