NeurIPS 2017

LightGBM: A Highly Efficient Gradient Boosting Decision Tree

NeurIPS 2017 Microsoft/LightGBM

We prove that, since data instances with larger gradients play a more important role in the computation of information gain, GOSS (Gradient-based One-Side Sampling) can obtain an accurate estimate of the information gain from a much smaller data sample.
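
A minimal sketch of the sampling rule behind that claim, assuming a NumPy setting: keep the top a-fraction of instances by gradient magnitude, draw a random b-fraction of the rest, and re-weight the sampled small-gradient instances by (1 - a) / b so the estimated gain stays roughly unbiased. The function name and parameters are illustrative, not LightGBM's actual API.

```python
import numpy as np

def goss_sample(gradients, a=0.2, b=0.1, seed=None):
    """Illustrative GOSS: return selected instance indices and per-instance weights.

    a: fraction of instances kept by largest |gradient|.
    b: fraction of the remaining instances sampled uniformly at random.
    Sampled small-gradient instances are re-weighted by (1 - a) / b.
    """
    rng = np.random.default_rng(seed)
    n = len(gradients)
    order = np.argsort(-np.abs(gradients))          # sort by |gradient|, descending
    top_k = int(a * n)
    rest_k = int(b * n)

    top_idx = order[:top_k]                         # large-gradient instances: always kept
    rest_idx = rng.choice(order[top_k:], size=rest_k, replace=False)

    idx = np.concatenate([top_idx, rest_idx])
    weights = np.ones(len(idx))
    weights[top_k:] = (1.0 - a) / b                 # compensate for the down-sampling
    return idx, weights

# Toy usage: gradients for 1000 instances.
grads = np.random.default_rng(0).normal(size=1000)
idx, w = goss_sample(grads, a=0.2, b=0.1, seed=0)
print(len(idx), w[:3], w[-3:])
```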

A Unified Approach to Interpreting Model Predictions

NeurIPS 2017 slundberg/shap

Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications.

FEATURE IMPORTANCE INTERPRETABLE MACHINE LEARNING
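
To make the Shapley-value idea this paper unifies concrete, here is a brute-force sketch that enumerates all feature coalitions for a toy model and treats "missing" features as fixed to a baseline. It illustrates the underlying definition only; it is not the paper's efficient estimators or the shap library's API.

```python
import itertools
from math import factorial

def exact_shapley(model, x, baseline):
    """Exact Shapley attributions for one prediction (brute force over coalitions).

    Features outside a coalition are replaced by their baseline value;
    feasible only for a small number of features.
    """
    m = len(x)
    phi = [0.0] * m

    def value(subset):
        z = [x[j] if j in subset else baseline[j] for j in range(m)]
        return model(z)

    for i in range(m):
        others = [j for j in range(m) if j != i]
        for size in range(m):
            for s in itertools.combinations(others, size):
                weight = factorial(len(s)) * factorial(m - len(s) - 1) / factorial(m)
                phi[i] += weight * (value(set(s) | {i}) - value(set(s)))
    return phi

# Toy linear model: attributions should equal weight * (x - baseline) per feature.
model = lambda z: 3 * z[0] + 2 * z[1] - 1 * z[2]
print(exact_shapley(model, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0]))
# -> roughly [3.0, 4.0, -3.0]
```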

ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games

NeurIPS 2017 facebookresearch/ELF

In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, and changes in game parameters, and it can host existing C/C++-based game environments such as the Arcade Learning Environment.

ATARI GAMES STARCRAFT

Bayesian GAN

NeurIPS 2017 andrewgordonwilson/bayesgan

Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and other data that are hard to model with an explicit likelihood.

Toward Multimodal Image-to-Image Translation

NeurIPS 2017 junyanz/BicycleGAN

Our proposed method encourages bijective consistency between the latent encoding and output modes.

IMAGE-TO-IMAGE TRANSLATION
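
A rough sketch of that bijective consistency, using toy linear stand-ins for the encoder E and generator G: one direction encodes a real output B and asks G to reconstruct it, the other samples a latent code and asks E to recover it from the generated output. The adversarial and KL terms, and any real network architecture, are omitted; all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, image_dim = 8, 64

# Toy linear stand-ins for the encoder E(B) -> z and generator G(A, z) -> B.
W_enc = rng.normal(size=(latent_dim, image_dim)) * 0.1
W_gen = rng.normal(size=(image_dim, latent_dim)) * 0.1

def E(b):
    return W_enc @ b

def G(a, z):
    return a + W_gen @ z           # condition on input image A, modulate by latent z

A = rng.normal(size=image_dim)     # input image (e.g. an edge map), flattened
B = rng.normal(size=image_dim)     # corresponding ground-truth output image

# Direction 1: B -> z -> B_hat, penalize image reconstruction.
z_encoded = E(B)
image_recon_loss = np.abs(B - G(A, z_encoded)).mean()

# Direction 2: z -> B_hat -> z_hat, penalize latent reconstruction.
z_sampled = rng.normal(size=latent_dim)
latent_recon_loss = np.abs(z_sampled - E(G(A, z_sampled))).mean()

print(image_recon_loss, latent_recon_loss)
```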

Inductive Representation Learning on Large Graphs

NeurIPS 2017 williamleif/GraphSAGE

Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions.

LINK PREDICTION NODE CLASSIFICATION REPRESENTATION LEARNING
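
The inductive step GraphSAGE applies to such graphs can be sketched briefly, assuming a mean aggregator: each node averages its neighbours' features, concatenates the result with its own features, and passes that through a learned linear map and nonlinearity. The graph and weights below are toy placeholders, not the released model.

```python
import numpy as np

def sage_mean_layer(H, neighbors, W):
    """One GraphSAGE layer with a mean aggregator.

    H:          (num_nodes, d_in) current node features.
    neighbors:  dict mapping node id -> list of neighbour ids.
    W:          (d_out, 2 * d_in) weights applied to [self ; mean-of-neighbours].
    """
    num_nodes, d_in = H.shape
    out = np.zeros((num_nodes, W.shape[0]))
    for v in range(num_nodes):
        nbrs = neighbors.get(v, [])
        agg = H[nbrs].mean(axis=0) if nbrs else np.zeros(d_in)
        h = np.maximum(W @ np.concatenate([H[v], agg]), 0.0)   # linear map + ReLU
        out[v] = h / (np.linalg.norm(h) + 1e-12)               # L2-normalize embedding
    return out

# Toy 4-node graph.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 5))
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
W = rng.normal(size=(8, 10))
print(sage_mean_layer(H, neighbors, W).shape)   # (4, 8)
```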

Learning Disentangled Representations with Semi-Supervised Deep Generative Models

NeurIPS 2017 probtorch/probtorch

We propose to learn such representations using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder.

SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability

NeurIPS 2017 google/svcca

We propose a new technique, Singular Vector Canonical Correlation Analysis (SVCCA), a tool for quickly comparing two representations in a way that is both invariant to affine transform (allowing comparison between different layers and networks) and fast to compute (allowing more comparisons to be calculated than with previous methods).
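
A hedged NumPy sketch of that two-step procedure: reduce each activation matrix with an SVD that keeps the directions explaining most of the variance, then run a closed-form CCA on the reduced representations and report the mean canonical correlation. This follows the description above rather than the exact numerics of the google/svcca code.

```python
import numpy as np

def svcca_similarity(X, Y, var_kept=0.99, eps=1e-8):
    """Mean canonical correlation between SVD-reduced activations.

    X, Y: (num_datapoints, num_neurons) activation matrices for the two layers.
    """
    def svd_reduce(A):
        A = A - A.mean(axis=0)
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        keep = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), var_kept) + 1
        return U[:, :keep] * s[:keep]          # projection onto top singular directions

    def inv_sqrt(M):
        w, V = np.linalg.eigh(M)
        w = np.clip(w, eps, None)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Xr, Yr = svd_reduce(X), svd_reduce(Y)
    n = X.shape[0]
    Sxx = Xr.T @ Xr / n + eps * np.eye(Xr.shape[1])
    Syy = Yr.T @ Yr / n + eps * np.eye(Yr.shape[1])
    Sxy = Xr.T @ Yr / n

    # Canonical correlations are the singular values of Sxx^-1/2 Sxy Syy^-1/2.
    corrs = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy), compute_uv=False)
    return float(np.clip(corrs, 0, 1).mean())

# Two toy "layers" that share part of their structure.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 20))
X = base @ rng.normal(size=(20, 30))
Y = base @ rng.normal(size=(20, 25)) + 0.1 * rng.normal(size=(500, 25))
print(svcca_similarity(X, Y))
```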

Dilated Recurrent Neural Networks

NeurIPS 2017 code-terminator/DilatedRNN

To provide a theory-based quantification of the architecture's advantages, we introduce a memory capacity measure, the mean recurrent length, which is more suitable for RNNs with long skip connections than existing measures.

SEQUENTIAL IMAGE CLASSIFICATION
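
The skip connections in question mean that a cell in a layer with dilation d reads its recurrent state from d steps back rather than from the previous step. A minimal, hypothetical NumPy sketch of one such layer, stacked with exponentially increasing dilations:

```python
import numpy as np

def dilated_rnn_layer(X, dilation, hidden_dim, seed=None):
    """Run one dilated vanilla-RNN layer over a sequence.

    X:        (seq_len, input_dim) input sequence.
    dilation: the recurrent state comes from `dilation` steps back,
              i.e. h_t = tanh(W x_t + U h_{t - dilation} + b).
    """
    rng = np.random.default_rng(seed)
    seq_len, input_dim = X.shape
    W = rng.normal(size=(hidden_dim, input_dim)) * 0.1
    U = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
    b = np.zeros(hidden_dim)

    H = np.zeros((seq_len, hidden_dim))
    for t in range(seq_len):
        h_prev = H[t - dilation] if t >= dilation else np.zeros(hidden_dim)
        H[t] = np.tanh(W @ X[t] + U @ h_prev + b)
    return H

# Stack layers with exponentially increasing dilations (1, 2, 4, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 16))
H = X
for d in (1, 2, 4):
    H = dilated_rnn_layer(H, dilation=d, hidden_dim=16, seed=0)
print(H.shape)   # (32, 16)
```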

The Reversible Residual Network: Backpropagation Without Storing Activations

NeurIPS 2017 renmengye/revnet-public

Deep residual networks (ResNets) have significantly pushed forward the state-of-the-art on image classification, increasing in performance as networks grow both deeper and wider.

IMAGE CLASSIFICATION
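
The construction named in the title splits the features into two halves joined by additive couplings, y1 = x1 + F(x2) and y2 = x2 + G(y1); because each block is exactly invertible, layer inputs can be recomputed during the backward pass instead of being stored. A small NumPy sketch with toy F and G:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# Toy residual functions F and G (stand-ins for small conv/MLP blocks).
W_f = rng.normal(size=(d, d)) * 0.1
W_g = rng.normal(size=(d, d)) * 0.1
F = lambda x: np.tanh(W_f @ x)
G = lambda x: np.tanh(W_g @ x)

def rev_block_forward(x1, x2):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_block_inverse(y1, y2):
    # Recompute the inputs from the outputs; no activations need to be stored.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = rng.normal(size=d), rng.normal(size=d)
y1, y2 = rev_block_forward(x1, x2)
x1_rec, x2_rec = rev_block_inverse(y1, y2)
print(np.allclose(x1, x1_rec), np.allclose(x2, x2_rec))   # True True
```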