ICLR 2020

Plug and Play Language Models: A Simple Approach to Controlled Text Generation

ICLR 2020 huggingface/transformers

Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities.

LANGUAGE MODELLING TEXT GENERATION

Reformer: The Efficient Transformer

ICLR 2020 huggingface/transformers

Large Transformer models routinely achieve state-of-the-art results on a number of tasks, but training these models can be prohibitively costly, especially on long sequences.

LANGUAGE MODELLING

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

ICLR 2020 huggingface/transformers

Instead of training a model that predicts the original identities of corrupted tokens, we corrupt the input by replacing some tokens with plausible alternatives sampled from a small generator network and train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not (a minimal sketch follows this entry).

LANGUAGE MODELLING NATURAL LANGUAGE UNDERSTANDING
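A minimal sketch of ELECTRA's replaced-token-detection objective at inference time, using the huggingface/transformers classes listed for this entry. The checkpoint name and the hand-corrupted sentence are illustrative assumptions, not taken from the paper.

```python
# Sketch: score each token of a corrupted sentence with an ELECTRA discriminator.
# Assumes the public "google/electra-small-discriminator" checkpoint is available.
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

model_name = "google/electra-small-discriminator"
discriminator = ElectraForPreTraining.from_pretrained(model_name)
tokenizer = ElectraTokenizerFast.from_pretrained(model_name)

# "ate" stands in for a generator sample that replaced the original token "cooked".
corrupted = "the chef ate the meal"
inputs = tokenizer(corrupted, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(**inputs).logits  # one score per token

# Positive logits mark tokens the discriminator believes were replaced.
flags = (logits > 0).long().squeeze().tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, flag in zip(tokens, flags):
    print(f"{token:>10s}  replaced={bool(flag)}")
```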

Well-Read Students Learn Better: On the Importance of Pre-training Compact Models

ICLR 2020 google-research/bert

Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training.

LANGUAGE MODELLING MODEL COMPRESSION SENTIMENT ANALYSIS

ProtoAttend: Attention-Based Prototypical Learning

ICLR 2020 google-research/google-research

We propose a novel, inherently interpretable machine learning method that bases its decisions on a few relevant examples, which we call prototypes (a minimal sketch follows this entry).

DECISION MAKING INTERPRETABLE MACHINE LEARNING
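A minimal, simplified sketch of attention-based prototypical prediction in the spirit of ProtoAttend: the prediction for a query is a convex combination of the labels of candidate database examples, weighted by attention, and the highest-weight examples serve as the prototypes that explain the decision. The function name, plain softmax attention, and toy random embeddings are assumptions for illustration, not the authors' implementation (their code lives in google-research/google-research).

```python
# Sketch: predict a query's label as an attention-weighted mixture of database labels.
import torch
import torch.nn.functional as F

def prototype_predict(query_emb, db_embs, db_labels_onehot, temperature=1.0):
    """query_emb: (d,), db_embs: (n, d), db_labels_onehot: (n, c)."""
    # Attention weights over the candidate database; the largest weights pick out prototypes.
    scores = db_embs @ query_emb / temperature      # (n,)
    weights = F.softmax(scores, dim=0)              # (n,), non-negative, sums to 1
    # Prediction is the convex combination of database labels under these weights.
    probs = weights @ db_labels_onehot              # (c,)
    return probs, weights

# Toy usage with random embeddings and 3 classes.
torch.manual_seed(0)
db_embs = torch.randn(8, 16)
db_labels = F.one_hot(torch.randint(0, 3, (8,)), num_classes=3).float()
query_emb = torch.randn(16)

probs, weights = prototype_predict(query_emb, db_embs, db_labels)
print("class probabilities:", probs)
print("top-2 prototypes:", weights.topk(2).indices.tolist())
```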

Weakly Supervised Disentanglement with Guarantees

ICLR 2020 google-research/google-research

Learning disentangled representations that correspond to factors of variation in real-world data is critical to interpretable and human-controllable machine learning.

Meta-Learning without Memorization

ICLR 2020 google-research/google-research

If meta-training tasks are not mutually exclusive, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes.

FEW-SHOT IMAGE CLASSIFICATION
