Trending Research

Torchmeta: A Meta-Learning library for PyTorch

14 Sep 2019 · tristandeleu/pytorch-meta

The constant introduction of standardized benchmarks in the literature has helped accelerate recent advances in meta-learning research.

META-LEARNING

130 stars · 3.95 stars / hour
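For readers who want to try the library, here is a minimal episode-loading sketch following the pattern in the repository's README (the `omniglot` helper and `BatchMetaDataLoader` are part of Torchmeta's documented API; the 5-way 1-shot setting is illustrative):

```python
# Minimal 5-way 1-shot Omniglot episode loading with Torchmeta.
from torchmeta.datasets.helpers import omniglot
from torchmeta.utils.data import BatchMetaDataLoader

# Each dataset element is a task: a small "train" (support) and
# "test" (query) split sampled from 5 random character classes.
dataset = omniglot("data", ways=5, shots=1, test_shots=15,
                   meta_train=True, download=True)
dataloader = BatchMetaDataLoader(dataset, batch_size=16, num_workers=4)

for batch in dataloader:
    train_inputs, train_targets = batch["train"]  # (16, 5, 1, 28, 28)
    test_inputs, test_targets = batch["test"]     # (16, 75, 1, 28, 28)
    break  # one batch of 16 tasks is enough for illustration
```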

DeepPrivacy: A Generative Adversarial Network for Face Anonymization

10 Sep 2019 · hukkelas/DeepPrivacy

Our model is based on a conditional generative adversarial network that generates images while accounting for the original pose and image background.

SOTA for Face Anonymization on 2019_test set (using extra training data)

FACE ANONYMIZATION

512 stars · 2.94 stars / hour
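A minimal sketch of the conditioning idea: a generator that inpaints an anonymized face given the masked background image and a pose-keypoint heatmap. This is an illustrative simplification, not the DeepPrivacy architecture, and all module names and sizes are hypothetical:

```python
import torch
import torch.nn as nn

class ConditionalFaceGenerator(nn.Module):
    """Toy conditional generator: synthesizes a face from the masked
    background plus pose heatmaps, so the output never depends on the
    original face pixels. Illustrative only."""
    def __init__(self, pose_channels=7, base=64):
        super().__init__()
        # Condition: 3 RGB channels (face region zeroed) + pose heatmaps.
        self.encode = nn.Sequential(
            nn.Conv2d(3 + pose_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(base, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # synthesized face, pasted back into the background
        )

    def forward(self, masked_image, pose_heatmap):
        x = torch.cat([masked_image, pose_heatmap], dim=1)
        return self.decode(self.encode(x))
```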

CTRL: A Conditional Transformer Language Model for Controllable Generation

Preprint 2019 · salesforce/ctrl

Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text.

LANGUAGE MODELLING · TEXT GENERATION

786 stars · 2.52 stars / hour
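The core mechanism is conditioning generation on a control code that is simply prepended to the sequence, so every generated token depends on it. A hedged sketch, where `model` and `tokenizer` are hypothetical stand-ins for any autoregressive language-model interface (not the salesforce/ctrl API):

```python
import torch

def generate_with_control_code(model, tokenizer, control_code, prompt,
                               max_new=50):
    """CTRL-style conditioning sketch: the control code (e.g. a domain
    label) is prepended to the input, steering all subsequent tokens.
    `model` returns (batch, seq, vocab) logits; both it and `tokenizer`
    are hypothetical stand-ins."""
    ids = torch.tensor([tokenizer.encode(control_code + " " + prompt)])
    for _ in range(max_new):
        logits = model(ids)                 # (1, seq_len, vocab)
        next_id = logits[0, -1].argmax()    # greedy decoding for brevity
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    return tokenizer.decode(ids[0].tolist())
```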

Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data

13 Sep 2019 · Qwicen/node

In this paper, we introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning architecture designed to work with any tabular data.

REPRESENTATION LEARNING

100 stars · 2.15 stars / hour
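A sketch of the central building block, a differentiable oblivious decision tree: all nodes at the same depth share one comparison, so a depth-d tree has d soft splits and 2^d leaves. NODE uses sparse entmax transformations for feature and threshold selection; this simplification substitutes plain softmax and sigmoid:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftObliviousTree(nn.Module):
    """Illustrative differentiable oblivious tree (simplified relative
    to NODE, which uses entmax instead of softmax/sigmoid)."""
    def __init__(self, in_features, depth=3):
        super().__init__()
        self.depth = depth
        self.feature_logits = nn.Parameter(torch.zeros(depth, in_features))
        self.thresholds = nn.Parameter(torch.zeros(depth))
        self.leaf_values = nn.Parameter(torch.zeros(2 ** depth))

    def forward(self, x):                                  # x: (batch, F)
        # Soft feature selection, one choice per depth level.
        weights = F.softmax(self.feature_logits, dim=-1)   # (depth, F)
        chosen = x @ weights.t()                           # (batch, depth)
        gates = torch.sigmoid(chosen - self.thresholds)    # P(go right)
        # Leaf probability = product over levels of gate or (1 - gate).
        leaf_probs = x.new_ones(x.shape[0], 1)
        for d in range(self.depth):
            g = gates[:, d:d + 1]
            leaf_probs = torch.cat([leaf_probs * (1 - g),
                                    leaf_probs * g], dim=1)
        return leaf_probs @ self.leaf_values               # (batch,)
```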

Generating Sequences With Recurrent Neural Networks

4 Aug 2013 · swechhachoudhary/Handwriting-synthesis

This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time.

TEXT GENERATION

85 stars · 1.22 stars / hour
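The recipe is a recurrent network trained for next-step prediction and then sampled one symbol at a time, feeding each sample back as input. A minimal character-level sketch (sizes illustrative; the paper also covers real-valued handwriting data with mixture-density outputs):

```python
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    """Next-step prediction model in the spirit of the paper: an LSTM
    reads one symbol at a time and outputs a distribution over the
    next symbol."""
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, ids, state=None):
        out, state = self.lstm(self.embed(ids), state)
        return self.head(out), state

@torch.no_grad()
def sample(model, start_id, steps=100):
    """Generate by sampling one symbol at a time and feeding it back."""
    ids, state = torch.tensor([[start_id]]), None
    out = [start_id]
    for _ in range(steps):
        logits, state = model(ids, state)
        probs = logits[0, -1].softmax(dim=-1)
        ids = torch.multinomial(probs, 1).view(1, 1)
        out.append(ids.item())
    return out
```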

CBNet: A Novel Composite Backbone Network Architecture for Object Detection

9 Sep 2019 · PKUbahuangliuhe/CBNet

In existing CNN-based detectors, the backbone network is a very important component for basic feature extraction, and the performance of the detector depends heavily on it.

INSTANCE SEGMENTATION · OBJECT DETECTION · SEMANTIC SEGMENTATION

131 stars · 0.99 stars / hour
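The idea is to assemble multiple identical backbones, feeding stage features of an assistant backbone into the lead backbone through composite connections. A heavily simplified same-stage sketch (the paper's actual composition schemes differ in detail; `make_stage` is a hypothetical stand-in for a real backbone stage such as a ResNet block group):

```python
import torch
import torch.nn as nn

def make_stage(c_in, c_out):
    """Hypothetical stand-in for one downsampling backbone stage."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                         nn.ReLU())

class CompositeBackbone(nn.Module):
    """Toy CBNet-style dual backbone: each assistant-stage output is
    adapted by a 1x1 conv and added to the same-stage output of the
    lead backbone before the next lead stage runs."""
    def __init__(self, channels=(3, 64, 128, 256)):
        super().__init__()
        pairs = list(zip(channels[:-1], channels[1:]))
        self.assistant = nn.ModuleList(make_stage(i, o) for i, o in pairs)
        self.lead = nn.ModuleList(make_stage(i, o) for i, o in pairs)
        self.adapt = nn.ModuleList(nn.Conv2d(o, o, 1) for _, o in pairs)

    def forward(self, x):
        a, l, features = x, x, []
        for stage_a, stage_l, adapt in zip(self.assistant, self.lead,
                                           self.adapt):
            a = stage_a(a)
            l = stage_l(l) + adapt(a)  # composite connection
            features.append(l)         # multi-scale features for the detector
        return features
```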

Neural Text Generation with Unlikelihood Training

12 Aug 2019 · facebookresearch/unlikelihood_training

In this paper, we show that the likelihood objective itself is at fault: it yields a model that assigns too much probability to sequences containing repeats and frequent words, unlike the human training distribution.

TEXT GENERATION

79 stars · 0.97 stars / hour
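The proposed fix augments maximum likelihood with an unlikelihood term that explicitly lowers the probability of negative candidates. A sketch of the token-level loss, assuming (as the paper does for penalizing repetition) that the candidates at each step are the previous context tokens:

```python
import torch
import torch.nn.functional as F

def token_unlikelihood_loss(logits, targets, alpha=1.0):
    """Cross-entropy plus an unlikelihood term -log(1 - p(c)) over
    negative candidates c (here: tokens already seen in the context).
    logits: (batch, seq, vocab); targets: (batch, seq) long tensor."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_probs.transpose(1, 2), targets)

    probs = log_probs.exp()
    batch, seq, _ = probs.shape
    # Candidate mask: for step t, mark tokens appearing in targets[:, :t].
    cand = torch.zeros_like(probs)
    for t in range(1, seq):
        cand[:, t].scatter_(1, targets[:, :t], 1.0)
    # Never penalize the true next token itself.
    cand.scatter_(2, targets.unsqueeze(-1), 0.0)

    # -log(1 - p) for each candidate; clamp avoids log(0).
    ul = -torch.log1p(-probs.clamp(max=1 - 1e-6)) * cand
    return nll + alpha * ul.sum() / (batch * seq)
```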