NeurIPS 2015

Teaching Machines to Read and Comprehend

NeurIPS 2015 facebookresearch/ParlAI

Teaching machines to read natural language documents remains an elusive challenge.

READING COMPREHENSION

Learning to Segment Object Candidates

NeurIPS 2015 facebookresearch/deepmask

Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier.

OBJECT DETECTION
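
The two-stage pipeline described above is easy to illustrate. Below is a minimal numpy sketch: `objectness_score` and `classify` are hypothetical stand-ins for the learned proposal and classifier networks, not deepmask's actual components.

```python
# Minimal sketch of the two-stage detection pipeline described above.
# `objectness_score` and `classify` are hypothetical stand-ins for the
# learned proposal and classifier networks; neither comes from deepmask.
import numpy as np

rng = np.random.default_rng(0)

def objectness_score(patch):
    # Stand-in for a learned class-agnostic "is this an object?" head.
    return float(patch.mean())

def classify(patch):
    # Stand-in for the downstream (more expensive) object classifier.
    return int(patch.sum() > patch.size / 2)

image = rng.random((64, 64))
boxes = [(y, x, 16, 16) for y in range(0, 48, 16) for x in range(0, 48, 16)]

# Stage 1: score all candidates cheaply, keep only the top-k proposals.
scored = sorted(boxes, key=lambda b: objectness_score(
    image[b[0]:b[0]+b[2], b[1]:b[1]+b[3]]), reverse=True)
proposals = scored[:3]

# Stage 2: run the classifier on the surviving proposals only.
labels = [classify(image[y:y+h, x:x+w]) for y, x, h, w in proposals]
print(list(zip(proposals, labels)))
```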

End-To-End Memory Networks

NeurIPS 2015 facebook/bAbI-tasks

On synthetic question-answering tasks, our approach is competitive with Memory Networks, but with less supervision.

LANGUAGE MODELLING · QUESTION ANSWERING
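
As a rough illustration of soft attention over an external memory in the spirit of this paper, here is a single-hop numpy sketch; the embedding matrices, sizes, and the reuse of `B` to score answers are illustrative assumptions rather than the paper's exact configuration.

```python
# A minimal single-hop sketch of attention over an external memory, in the
# spirit of end-to-end memory networks. Matrix shapes and the answer-scoring
# step are illustrative assumptions, not the paper's exact configuration.
import numpy as np

rng = np.random.default_rng(1)
vocab, d = 20, 8

A = rng.normal(size=(vocab, d))   # memory (input) embedding
C = rng.normal(size=(vocab, d))   # memory (output) embedding
B = rng.normal(size=(vocab, d))   # question embedding

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

sentences = [[3, 5], [7, 2, 9], [1, 4]]   # story as lists of token ids
question = [5, 9]

m = np.stack([A[s].sum(0) for s in sentences])  # memory vectors
c = np.stack([C[s].sum(0) for s in sentences])  # output vectors
u = B[question].sum(0)                          # query vector

p = softmax(m @ u)              # soft attention over memories
o = p @ c                       # weighted memory readout
answer_logits = (o + u) @ B.T   # score candidate answer words (assumption)
print(p, answer_logits.argmax())
```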

Training Very Deep Networks

NeurIPS 2015 LiyuanLucasLiu/LM-LSTM-CRF

Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success.

IMAGE CLASSIFICATION
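
This is the highway networks paper, which makes very deep networks trainable by gating each layer between a nonlinear transform and an identity carry path. A minimal numpy sketch of one highway layer follows; reusing a single layer's weights across 50 applications is only for brevity, and the sizes are illustrative.

```python
# A minimal numpy sketch of a highway layer: the transform gate T decides
# how much of the nonlinear transform H(x) to apply versus carrying x
# through unchanged, which keeps very deep stacks trainable.
import numpy as np

rng = np.random.default_rng(2)
d = 16

W_h, b_h = rng.normal(size=(d, d)) * 0.1, np.zeros(d)
W_t, b_t = rng.normal(size=(d, d)) * 0.1, np.full(d, -2.0)  # bias toward carry

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x):
    h = np.tanh(x @ W_h + b_h)      # candidate transform H(x)
    t = sigmoid(x @ W_t + b_t)      # transform gate T(x)
    return h * t + x * (1.0 - t)    # carry the rest of x unchanged

x = rng.normal(size=d)
for _ in range(50):                 # stacking many layers stays stable
    x = highway_layer(x)
print(x[:4])
```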

Deep learning with Elastic Averaging SGD

NeurIPS 2015 JoeriHermans/dist-keras

We empirically demonstrate that in the deep learning setting, due to the existence of many local optima, allowing more exploration can lead to improved performance.

IMAGE CLASSIFICATION · STOCHASTIC OPTIMIZATION
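
The exploration the abstract refers to comes from letting each worker's parameters fluctuate around a shared center variable that they are elastically pulled toward. A toy numpy sketch of the synchronous elastic-averaging update follows; the quadratic objective and all hyperparameters are illustrative assumptions.

```python
# A toy numpy sketch of (synchronous) elastic averaging SGD: each worker
# takes noisy local gradient steps but is elastically pulled toward a
# shared center variable, which in turn moves toward the workers.
import numpy as np

rng = np.random.default_rng(3)
n_workers, eta, rho = 4, 0.1, 0.5

def grad(x):
    return 2.0 * (x - 3.0)   # gradient of the toy loss (x - 3)^2

workers = rng.normal(size=n_workers)   # local parameters x_i
center = 0.0                           # shared center variable

for step in range(100):
    noise = rng.normal(scale=0.5, size=n_workers)   # stochastic gradients
    elastic = rho * (workers - center)              # pull toward center
    workers -= eta * (grad(workers) + noise + elastic)
    center += eta * rho * np.sum(workers - center)  # center moves too

print(workers, center)   # all near the optimum at 3.0
```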

Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets

NeurIPS 2015 yandexdataschool/AgentNet

Despite recent achievements in machine learning, we are still very far from achieving real artificial intelligence.

BinaryConnect: Training Deep Neural Networks with binary weights during propagations

NeurIPS 2015 MatthieuCourbariaux/BinaryConnect

We introduce BinaryConnect, a method that trains a DNN with binary weights during the forward and backward propagations, while retaining the precision of the stored real-valued weights in which gradients are accumulated.
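
The method as described maps directly to a few lines of numpy. In this toy single-unit sketch, the weights are binarized for the forward and backward passes while updates accumulate in the real-valued weights (clipped to [-1, 1], as in the paper); the data and learning rate are illustrative.

```python
# A minimal numpy sketch of the BinaryConnect idea on one linear unit:
# binarize the weights for the forward and backward passes, but accumulate
# gradient updates in the real-valued weights. Data and learning rate are
# illustrative; the real method binarizes full DNN layers.
import numpy as np

rng = np.random.default_rng(4)
w = rng.uniform(-0.1, 0.1, size=5)      # real-valued stored weights
x = rng.normal(size=(100, 5))
y = x @ np.sign(rng.normal(size=5))     # toy regression targets
lr = 0.01

for _ in range(200):
    wb = np.sign(w)                     # binarize: wb in {-1, +1}
    pred = x @ wb                       # forward pass uses binary weights
    err = pred - y
    g = x.T @ err / len(x)              # gradient w.r.t. the binary weights
    w -= lr * g                         # ...but update the real weights
    w = np.clip(w, -1.0, 1.0)           # keep accumulators in [-1, 1]

print(np.sign(w))                       # the learned binary weights
```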

A Recurrent Latent Variable Model for Sequential Data

NeurIPS 2015 jych/nips2015_vrnn

In this paper, we explore the inclusion of latent random variables into the dynamic hidden state of a recurrent neural network (RNN) by combining elements of the variational autoencoder.
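
One step of such a model might look like the hedged numpy sketch below: a prior over the latent z is computed from the previous hidden state, and a sample of z enters the recurrence alongside the input. Dimensions and the parameterization are illustrative assumptions, and the inference network used during training is omitted.

```python
# A minimal numpy sketch of one VRNN-style step: a prior over the latent z
# is computed from the hidden state, and a sample of z joins the input in
# the recurrence. Shapes and the tanh recurrence are illustrative
# assumptions, not the paper's exact parameterization.
import numpy as np

rng = np.random.default_rng(5)
d_x, d_z, d_h = 4, 3, 8

W_prior = rng.normal(size=(d_h, 2 * d_z)) * 0.1   # h -> (mu, log sigma)
W_rec = rng.normal(size=(d_x + d_z + d_h, d_h)) * 0.1

def vrnn_step(x, h):
    mu, log_sigma = np.split(h @ W_prior, 2)           # prior p(z_t | h_{t-1})
    z = mu + np.exp(log_sigma) * rng.normal(size=d_z)  # sample the latent
    h_new = np.tanh(np.concatenate([x, z, h]) @ W_rec) # recurrence on (x, z, h)
    return z, h_new

h = np.zeros(d_h)
for t in range(5):
    x = rng.normal(size=d_x)
    z, h = vrnn_step(x, h)
print(h[:4])
```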

Deep Convolutional Inverse Graphics Network

NeurIPS 2015 willwhitney/dc-ign

This paper presents the Deep Convolutional Inverse Graphics Network (DC-IGN), a model that learns an interpretable representation of images.

Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks

NeurIPS 2015 Chung-I/Variational-Recurrent-Autoencoder-Tensorflow

Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning.

CONSTITUENCY PARSING · IMAGE CAPTIONING · SPEECH RECOGNITION
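
Scheduled sampling itself is simple to sketch: during training, each step feeds back the ground-truth token with some probability and the model's own prediction otherwise, with that probability decayed over training. The toy model below is a hypothetical stand-in, not the paper's RNN.

```python
# A minimal sketch of scheduled sampling for a toy autoregressive model:
# each training step feeds back the ground-truth token with probability eps
# and the model's own prediction otherwise, with eps decayed over epochs.
import numpy as np

rng = np.random.default_rng(6)
vocab = 10

def model_predict(prev_token, step):
    # Hypothetical stand-in for the RNN's next-token prediction.
    return (prev_token + step) % vocab

target = [1, 3, 5, 7, 9]

for epoch in range(3):
    eps = max(0.0, 1.0 - 0.4 * epoch)   # linear decay of teacher forcing
    prev, inputs = 0, []
    for t, gold in enumerate(target):
        inputs.append(prev)
        pred = model_predict(prev, t)
        # Flip a coin: feed the ground truth or the model's own output.
        prev = gold if rng.random() < eps else pred
    print(f"eps={eps:.1f} inputs fed to the model: {inputs}")
```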