About

Benchmarks

You can find evaluation results in the subtasks. You can also submit evaluation metrics for this task.

Subtasks

Datasets

Greatest papers with code

Semi-Supervised Sequence Modeling with Cross-View Training

EMNLP 2018 tensorflow/models

We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.

CCG SUPERTAGGING DEPENDENCY PARSING MACHINE TRANSLATION MULTI-TASK LEARNING NAMED ENTITY RECOGNITION PART-OF-SPEECH TAGGING UNSUPERVISED REPRESENTATION LEARNING
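The idea behind CVT can be sketched with a toy loss computation: labeled examples get a standard supervised loss on the full-view prediction, while auxiliary modules that see only a restricted view of the input are trained to match the full-view output on unlabeled data. This is an illustrative numpy sketch under assumed names (`full_view_logits`, `partial_view_logits` are fake logits, not the authors' code):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(p_target, logits):
    """Mean cross-entropy of `logits` against target distribution `p_target`."""
    return -(p_target * np.log(softmax(logits) + 1e-12)).sum(axis=-1).mean()

rng = np.random.default_rng(0)

# Fake logits for a 4-example, 3-class tagging problem (illustrative only).
labels = np.eye(3)[rng.integers(0, 3, size=4)]   # one-hot gold labels
full_view_logits = rng.normal(size=(4, 3))       # primary module: sees full input
partial_view_logits = rng.normal(size=(4, 3))    # auxiliary module: restricted view

# Supervised term on labeled data.
supervised_loss = cross_entropy(labels, full_view_logits)

# Consistency term on unlabeled data: the full-view softmax is treated as a
# fixed target (in practice, gradients are stopped through the teacher).
consistency_loss = cross_entropy(softmax(full_view_logits), partial_view_logits)

total_loss = supervised_loss + consistency_loss
print(float(total_loss) > 0)
```

Because the auxiliary views (e.g. a forward-only LSTM direction) are strictly weaker than the full view, matching the full-view prediction forces the shared encoder to pack more information into its representations.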

Meta-Learning Update Rules for Unsupervised Representation Learning

ICLR 2019 tensorflow/models

Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.

META-LEARNING UNSUPERVISED REPRESENTATION LEARNING

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

19 Nov 2015 tensorflow/models

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.

CONDITIONAL IMAGE GENERATION IMAGE CLUSTERING UNSUPERVISED REPRESENTATION LEARNING

Learning and Evaluating Contextual Embedding of Source Code

ICML 2020 google-research/google-research

We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training, and with fewer labeled examples.

CONTEXTUAL EMBEDDING FOR SOURCE CODE EXCEPTION TYPE FUNCTION-DOCSTRING MISMATCH NATURAL LANGUAGE UNDERSTANDING SWAPPED OPERANDS UNSUPERVISED REPRESENTATION LEARNING VARIABLE MISUSE WRONG BINARY OPERATOR

TabNet: Attentive Interpretable Tabular Learning

20 Aug 2019 google-research/google-research

We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet.

DECISION MAKING FEATURE SELECTION SELF-SUPERVISED LEARNING UNSUPERVISED REPRESENTATION LEARNING

Continual Unsupervised Representation Learning

NeurIPS 2019 deepmind/deepmind-research

Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially.

CONTINUAL LEARNING OMNIGLOT UNSUPERVISED REPRESENTATION LEARNING

Viewmaker Networks: Learning Views for Unsupervised Representation Learning

ICLR 2021 makcedward/nlpaug

However, designing these views requires considerable trial and error by human experts, hindering widespread adoption of unsupervised representation learning methods across domains and modalities.

UNSUPERVISED REPRESENTATION LEARNING

Visual Reinforcement Learning with Imagined Goals

NeurIPS 2018 vitchyr/rlkit

For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires.

UNSUPERVISED REPRESENTATION LEARNING

Unsupervised Representation Learning by Predicting Image Rotations

ICLR 2018 facebookresearch/vissl

However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale.

SELF-SUPERVISED IMAGE CLASSIFICATION UNSUPERVISED REPRESENTATION LEARNING
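The pretext task here is simple enough to sketch: rotate each image by a random multiple of 90 degrees and train a 4-way classifier to predict which rotation was applied, so no human labels are needed. A minimal data-construction sketch (names like `make_rotation_batch` are illustrative, not the paper's code):

```python
import numpy as np

def make_rotation_batch(images, rng):
    """Rotate each image by a random multiple of 90 degrees.

    images: array of shape (n, h, w) -- grayscale for simplicity.
    Returns (rotated, labels) where labels[i] in {0, 1, 2, 3} is the
    number of counterclockwise 90-degree rotations applied to images[i].
    """
    labels = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, labels)])
    return rotated, labels

rng = np.random.default_rng(0)
batch = rng.random((8, 32, 32))          # 8 fake 32x32 "images"
x, y = make_rotation_batch(batch, rng)
print(x.shape, y.shape)                  # (8, 32, 32) (8,)
```

A network trained to classify `(x, y)` must recognize object orientation, which in practice yields features that transfer to downstream recognition tasks; the trunk is then reused and the rotation head discarded.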

Generative Pretraining from Pixels

ICML 2020 openai/image-gpt

Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images.

Ranked #10 on Image Classification on STL-10 (using extra training data)

SELF-SUPERVISED IMAGE CLASSIFICATION UNSUPERVISED REPRESENTATION LEARNING
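The pretraining objective can be sketched as next-pixel prediction: the image is quantized, flattened to a 1-D sequence in raster-scan order, and a model predicts each pixel from the prefix before it. A hedged numpy sketch of the data preparation only (the function name and quantization level are illustrative assumptions, not the released code):

```python
import numpy as np

def to_autoregressive_pairs(image, num_levels=16):
    """Quantize pixels to `num_levels` discrete values and build
    (context, target) pairs for next-pixel prediction in raster order.

    image: array of floats in [0, 1), shape (h, w).
    Returns (contexts, targets): contexts[i] is the prefix of pixels
    before position i; targets[i] is the discrete pixel to predict there.
    """
    seq = np.floor(image.flatten() * num_levels).clip(0, num_levels - 1).astype(int)
    contexts = [seq[:i] for i in range(len(seq))]   # prefix at each position
    targets = seq                                    # pixel value to predict
    return contexts, targets

rng = np.random.default_rng(0)
img = rng.random((4, 4))                 # tiny fake image
ctx, tgt = to_autoregressive_pairs(img)
print(len(ctx), tgt.shape)               # 16 (16,)
```

After pretraining a transformer on this objective, its hidden states are evaluated as image representations, e.g. with linear probes on classification benchmarks such as STL-10 above.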