Several mechanisms for focusing the attention of a neural network on selected parts of its input or memory have been used successfully in deep learning models in recent years.
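As a hedged illustration of the general idea (not the specific architecture of any paper listed here), soft attention can be sketched as a softmax-weighted read over memory slots; all names below are hypothetical:

```python
import numpy as np

def soft_attention(query, memory):
    """Weight memory rows by similarity to the query, then average.

    query:  (d,) vector.
    memory: (n, d) matrix of n memory slots.
    Returns the attention-weighted read vector of shape (d,).
    """
    scores = memory @ query                 # (n,) similarity scores
    scores = scores - scores.max()          # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum()                # softmax over memory slots
    return weights @ memory                 # convex combination of slots

memory = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
read = soft_attention(np.array([10.0, 0.0]), memory)
# the read is dominated by the first memory slot
```

Because the weights are a softmax, the read is differentiable in the query, which is what lets such mechanisms be trained end-to-end by backpropagation.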
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.
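InfoGAN's key addition is a variational lower bound on the mutual information between a latent code and the generated sample. A minimal sketch of that term for a categorical code, assuming a hypothetical Q-network that predicts the code from the generated sample (logits only, no training loop):

```python
import numpy as np

def info_lower_bound(code_onehot, q_logits):
    """Variational lower bound on I(c; G(z, c)), up to the constant H(c).

    code_onehot: (batch, k) one-hot latent codes fed to the generator.
    q_logits:    (batch, k) Q-network logits predicting c from G(z, c).
    Returns E[log Q(c | G(z, c))]; higher means the code is easier to
    recover from the sample, i.e. more mutual information.
    """
    logits = q_logits - q_logits.max(axis=1, keepdims=True)
    log_q = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return (code_onehot * log_q).sum(axis=1).mean()

codes = np.eye(3)                  # three distinct categorical codes
good_logits = 10.0 * np.eye(3)     # Q confidently recovers each code
bad_logits = np.zeros((3, 3))      # Q is uninformative about the code
```

Maximizing this bound alongside the usual GAN losses is what pushes each code dimension to control a distinct, recoverable factor of variation.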
We study the problem of synthesizing a number of likely future frames from a single input image.
A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment.
In contrast to previous region-based detectors such as Fast/Faster R-CNN, which apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional, with almost all computation shared across the entire image.
The move from hand-designed features to learned features in machine learning has been wildly successful.
In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes, or word embeddings, which are represented by graphs.
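One common way to generalize a convolution from grids to graphs (a hedged sketch of the broad family, not this paper's specific spectral construction) is to aggregate neighbor features through a symmetrically normalized adjacency matrix:

```python
import numpy as np

def graph_conv(adj, features, weight):
    """One propagation step on a graph.

    adj:      (n, n) binary adjacency matrix.
    features: (n, d_in) node features.
    weight:   (d_in, d_out) learned linear map.
    Adds self-loops, normalizes by degree, aggregates neighbor
    features, applies the linear map, then a ReLU.
    """
    a_hat = adj + np.eye(adj.shape[0])            # self-loops keep own features
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)

# Two connected nodes mix their features after one step.
adj = np.array([[0.0, 1.0],
                [1.0, 0.0]])
x = np.eye(2)                                     # one-hot node features
out = graph_conv(adj, x, np.eye(2))
```

Like a grid convolution, the same weight matrix is applied at every node, so the parameter count is independent of graph size; only the aggregation pattern changes with the graph.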