NeurIPS 2017

LightGBM: A Highly Efficient Gradient Boosting Decision Tree

NeurIPS 2017 Microsoft/LightGBM

We prove that, since data instances with larger gradients play a more important role in computing information gain, GOSS can obtain a quite accurate estimate of the information gain from a much smaller sample of the data.
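The sampling step described above (Gradient-based One-Side Sampling) can be sketched in NumPy. The function name and default rates below are illustrative, not LightGBM's API: keep the instances with the largest gradients, subsample the rest, and up-weight the subsampled part so the gain estimate stays approximately unbiased.

```python
import numpy as np

def goss_sample(gradients, top_rate=0.2, other_rate=0.1, rng=None):
    """Gradient-based One-Side Sampling (GOSS), a minimal sketch.

    Keeps the top_rate fraction of instances with the largest |gradient|,
    randomly samples other_rate of the remainder, and amplifies the
    sampled small-gradient instances by (1 - top_rate) / other_rate so
    the information-gain estimate stays approximately unbiased.
    """
    rng = rng or np.random.default_rng(0)
    n = len(gradients)
    order = np.argsort(-np.abs(gradients))   # sort by |gradient|, descending
    n_top = int(top_rate * n)
    n_other = int(other_rate * n)
    top_idx = order[:n_top]                  # always keep large gradients
    other_idx = rng.choice(order[n_top:], size=n_other, replace=False)
    idx = np.concatenate([top_idx, other_idx])
    weights = np.ones(n)
    weights[other_idx] = (1.0 - top_rate) / other_rate  # compensate subsampling
    return idx, weights[idx]
```

With the defaults, only 30% of the data enters the split search, and the sampled small-gradient instances each carry weight (1 − 0.2)/0.1 = 8.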

A Unified Approach to Interpreting Model Predictions

NeurIPS 2017 slundberg/shap

Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications.

INTERPRETABLE MACHINE LEARNING
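The SHAP values this paper unifies are Shapley values from cooperative game theory. For a tiny feature set they can be computed exactly by enumerating subsets; the sketch below is that textbook definition, not the shap library's API, whose contribution is approximating this efficiently for real models.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by subset enumeration, a minimal sketch.

    value_fn maps a frozenset of feature indices to the model's value
    with exactly those features present. Exponential in n_features,
    so only viable for very small n.
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                s = frozenset(subset)
                # weight = |S|! (n - |S| - 1)! / n!
                w = (factorial(len(s)) * factorial(n_features - len(s) - 1)
                     / factorial(n_features))
                phi[i] += w * (value_fn(s | {i}) - value_fn(s))
    return phi
```

For an additive value function each feature's Shapley value equals its own contribution, and the values always sum to v(all features) − v(∅) (the efficiency property).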

ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games

NeurIPS 2017 facebookresearch/ELF

In addition, our platform is flexible in its environment-agent communication topologies, choices of RL methods, and game parameters, and it can host existing C/C++-based game environments such as the Arcade Learning Environment.

ATARI GAMES STARCRAFT

Self-Normalizing Neural Networks

NeurIPS 2017 bioinf-jku/SNNs

We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations.

DRUG DISCOVERY PULSAR PREDICTION

Bayesian GAN

NeurIPS 2017 andrewgordonwilson/bayesgan

Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and other data that are hard to model with an explicit likelihood.

PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space

NeurIPS 2017 charlesq34/pointnet2

By exploiting metric space distances, our network is able to learn local features with increasing contextual scales.

SEMANTIC SEGMENTATION
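PointNet++ builds its hierarchy by repeatedly choosing region centroids with farthest point sampling, then learning features within each local neighborhood. The centroid-selection step can be sketched in NumPy (an illustrative sketch, not the paper's code):

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy farthest point sampling, a minimal sketch.

    Starting from an arbitrary seed point, repeatedly picks the point
    farthest from everything chosen so far, giving k centroids that
    cover the point set well.
    """
    n = len(points)
    chosen = [seed]
    min_dist = np.full(n, np.inf)   # distance to nearest chosen point
    for _ in range(k - 1):
        d = np.linalg.norm(points - points[chosen[-1]], axis=1)
        min_dist = np.minimum(min_dist, d)
        chosen.append(int(np.argmax(min_dist)))
    return np.array(chosen)
```

On ten evenly spaced points on a line, starting from one end, the method picks the two extremes first and then the midpoint, illustrating the coverage property.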

Toward Multimodal Image-to-Image Translation

NeurIPS 2017 junyanz/BicycleGAN

Our proposed method encourages bijective consistency between the latent encoding and output modes.

IMAGE-TO-IMAGE TRANSLATION

Inductive Representation Learning on Large Graphs

NeurIPS 2017 williamleif/GraphSAGE

Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions.

NODE CLASSIFICATION REPRESENTATION LEARNING
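GraphSAGE learns embeddings inductively by aggregating each node's neighborhood rather than memorizing per-node vectors. A minimal NumPy sketch of one layer with the mean aggregator (using two weight matrices for the self and neighbor terms, which is equivalent to the paper's concatenation; names are illustrative):

```python
import numpy as np

def sage_mean_layer(H, adj, W_self, W_neigh):
    """One GraphSAGE layer with a mean aggregator, a minimal sketch.

    H:       (n, d) current node embeddings
    adj:     list of neighbor-index lists, one per node
    W_self:  (d, d_out) weight for the node's own embedding
    W_neigh: (d, d_out) weight for the aggregated neighborhood
    """
    out = np.empty((H.shape[0], W_self.shape[1]))
    for v, nbrs in enumerate(adj):
        agg = H[nbrs].mean(axis=0) if nbrs else np.zeros(H.shape[1])
        out[v] = np.maximum(H[v] @ W_self + agg @ W_neigh, 0.0)  # ReLU
    # L2-normalize each embedding, as in the paper
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.maximum(norms, 1e-12)
```

Because the layer only needs a node's features and its neighbors', it can embed nodes never seen during training, which is what "inductive" means here.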

Learning Disentangled Representations with Semi-Supervised Deep Generative Models

NeurIPS 2017 probtorch/probtorch

We propose to learn such representations using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder.

Dynamic Routing Between Capsules

NeurIPS 2017 timomernick/pytorch-capsule

We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters.

IMAGE CLASSIFICATION
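For the activity-vector length to be read as a probability, capsules apply a "squashing" nonlinearity that bounds the length in [0, 1) while preserving orientation. A NumPy sketch of that function:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule squashing nonlinearity, a minimal sketch.

    Maps a vector s to (|s|^2 / (1 + |s|^2)) * (s / |s|): short vectors
    shrink toward zero, long vectors approach unit length, and the
    direction (the instantiation parameters) is unchanged.
    """
    sq = np.sum(np.square(s), axis=axis, keepdims=True)
    norm = np.sqrt(sq + eps)       # eps avoids division by zero
    return (sq / (1.0 + sq)) * (s / norm)
```

For example, a vector of length 5 squashes to length 25/26 ≈ 0.96 along the same direction, so a confidently detected entity gets a near-unit activity vector.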