Unsupervised Representation Learning

26 papers with code · Methodology

Leaderboards

You can find evaluation results in the subtasks. You can also submit evaluation metrics for this task.

Greatest papers with code

Meta-Learning Update Rules for Unsupervised Representation Learning

ICLR 2019 tensorflow/models

Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.

META-LEARNING UNSUPERVISED REPRESENTATION LEARNING
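To make the bilevel structure concrete, here is a toy sketch (not the paper's learned optimizer): an inner loop applies a parameterized Hebbian-style unsupervised update rule to learn features, and an outer loop scores those features with a small labeled probe and searches over the rule's coefficients. The task generator, nearest-centroid probe, and random-search outer loop are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=200, d=20):
    """Toy two-class task: class means differ along a random direction."""
    mu = rng.normal(size=d)
    y = rng.integers(0, 2, size=n)
    x = rng.normal(size=(n, d)) + np.where(y[:, None] == 1, mu, -mu)
    return x, y

def inner_unsupervised_phase(x_unlabeled, theta, n_features=8, steps=50, lr=0.01):
    """Inner loop: apply a parameterized Hebbian-style update rule (coefficients theta)."""
    d = x_unlabeled.shape[1]
    W = rng.normal(scale=0.1, size=(d, n_features))
    for _ in range(steps):
        xb = x_unlabeled[rng.choice(len(x_unlabeled), size=16)]
        h = np.tanh(xb @ W)                    # post-synaptic activity
        hebb = xb.T @ h / len(xb)              # Hebbian term
        decay = W * (h ** 2).mean(axis=0)      # Oja-style decay term
        W += lr * (theta[0] * hebb - theta[1] * decay)
    return W

def outer_score(theta, n_labeled=10):
    """Meta-objective: few-shot accuracy of a nearest-centroid probe on learned features."""
    x, y = make_task()
    W = inner_unsupervised_phase(x, theta)
    feats = np.tanh(x @ W)
    idx0 = rng.choice(np.where(y == 0)[0], size=n_labeled // 2, replace=False)
    idx1 = rng.choice(np.where(y == 1)[0], size=n_labeled // 2, replace=False)
    c0, c1 = feats[idx0].mean(axis=0), feats[idx1].mean(axis=0)
    pred = (np.linalg.norm(feats - c1, axis=1) < np.linalg.norm(feats - c0, axis=1)).astype(int)
    return (pred == y).mean()

# Outer loop: random search over the update rule's coefficients (the paper instead
# meta-trains a much richer, neural update rule with gradients).
best_theta, best_acc = None, 0.0
for _ in range(30):
    theta = rng.uniform(0, 2, size=2)
    acc = np.mean([outer_score(theta) for _ in range(3)])
    if acc > best_acc:
        best_theta, best_acc = theta, acc
print("best rule coefficients:", best_theta, "probe accuracy:", round(best_acc, 3))
```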

Semi-Supervised Sequence Modeling with Cross-View Training

EMNLP 2018 tensorflow/models

We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.

CCG SUPERTAGGING DEPENDENCY PARSING MACHINE TRANSLATION MULTI-TASK LEARNING NAMED ENTITY RECOGNITION UNSUPERVISED REPRESENTATION LEARNING
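A minimal PyTorch sketch of the cross-view idea, assuming a toy tagger (sizes, heads, and data are hypothetical): the primary module sees the full Bi-LSTM output and is trained on labeled data, while auxiliary modules with restricted views (forward-only or backward-only states) are trained on unlabeled data to match the primary module's soft predictions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes; the real CVT model is a large Bi-LSTM over word/char embeddings.
VOCAB, EMB, HID, NUM_TAGS = 1000, 64, 128, 10

class ToyCVTTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.bilstm = nn.LSTM(EMB, HID, batch_first=True, bidirectional=True)
        self.primary = nn.Linear(2 * HID, NUM_TAGS)   # full view
        self.aux_fwd = nn.Linear(HID, NUM_TAGS)       # restricted view: forward states only
        self.aux_bwd = nn.Linear(HID, NUM_TAGS)       # restricted view: backward states only

    def forward(self, tokens):
        h, _ = self.bilstm(self.embed(tokens))        # (B, T, 2*HID)
        return self.primary(h), self.aux_fwd(h[..., :HID]), self.aux_bwd(h[..., HID:])

model = ToyCVTTagger()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
labeled_x = torch.randint(0, VOCAB, (8, 12)); labeled_y = torch.randint(0, NUM_TAGS, (8, 12))
unlabeled_x = torch.randint(0, VOCAB, (32, 12))

# Supervised step: cross-entropy on the primary (full-view) predictions.
primary, _, _ = model(labeled_x)
sup_loss = F.cross_entropy(primary.reshape(-1, NUM_TAGS), labeled_y.reshape(-1))

# Cross-view step: auxiliary modules match the primary's soft predictions on
# unlabeled data (the primary is treated as a fixed target).
primary_u, aux_f, aux_b = model(unlabeled_x)
target = F.softmax(primary_u, dim=-1).detach()
cvt_loss = sum(F.kl_div(F.log_softmax(a, dim=-1), target, reduction="batchmean")
               for a in (aux_f, aux_b))

(sup_loss + cvt_loss).backward()
opt.step()
```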

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

19 Nov 2015 tensorflow/models

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.

CONDITIONAL IMAGE GENERATION UNSUPERVISED REPRESENTATION LEARNING
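For reference, a minimal sketch following the DCGAN architectural guidelines (strided convolutions instead of pooling, batch norm, ReLU in the generator, LeakyReLU in the discriminator, Tanh output); layer widths are illustrative, and the adversarial training loop is omitted.

```python
import torch
import torch.nn as nn

Z = 100  # latent dimension

G = nn.Sequential(  # 1x1 latent -> 64x64 image via fractionally-strided convolutions
    nn.ConvTranspose2d(Z, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),    # 4x4
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),  # 8x8
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),    # 16x16
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),     # 32x32
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                              # 64x64
)

D = nn.Sequential(  # strided convolutions downsample; no pooling layers
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),                         # 32x32
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),  # 16x16
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True), # 8x8
    nn.Conv2d(256, 1, 8, 1, 0),                                                 # real/fake logit
)

z = torch.randn(4, Z, 1, 1)
fake = G(z)                    # (4, 3, 64, 64)
logits = D(fake).view(-1)      # (4,)
print(fake.shape, logits.shape)
```

For unsupervised representation learning, the discriminator's intermediate features are typically reused for downstream classification.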

TabNet: Attentive Interpretable Tabular Learning

20 Aug 2019 google-research/google-research

We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet.

DECISION MAKING FEATURE SELECTION UNSUPERVISED REPRESENTATION LEARNING
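A grossly simplified sketch of the core idea behind attentive tabular learning: a learned, per-sample feature mask gates the raw inputs before a small feature transformer. The real TabNet stacks several decision steps, uses sparsemax masks with a feature-reuse prior, and adds an unsupervised pre-training objective; everything here (softmax mask, single step, layer sizes) is a stand-in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAttentiveTabular(nn.Module):
    """One simplified "decision step": a learned soft feature-selection mask."""
    def __init__(self, n_features, hidden, n_classes):
        super().__init__()
        self.mask_net = nn.Linear(n_features, n_features)   # simplified attentive transformer
        self.feature_net = nn.Sequential(                   # simplified feature transformer
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        mask = F.softmax(self.mask_net(x), dim=-1)   # per-sample feature mask (TabNet: sparsemax)
        out = self.feature_net(mask * x)             # only "attended" features pass through
        return out, mask                             # inspecting the mask gives interpretability

model = TinyAttentiveTabular(n_features=16, hidden=32, n_classes=3)
x = torch.randn(8, 16)
logits, mask = model(x)
print(logits.shape, mask.sum(dim=-1))  # masks sum to 1 per row
```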

Visual Reinforcement Learning with Imagined Goals

NeurIPS 2018 vitchyr/rlkit

For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires.

UNSUPERVISED REPRESENTATION LEARNING
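The core mechanism, sketched very loosely: a VAE is trained on raw observations, goals are "imagined" by sampling from its latent prior, and the reward is a negative distance in latent space. The encoder below is a hypothetical stand-in, not the paper's model.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a VAE image encoder trained on the agent's observations.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 48 * 48, 16))

def latent_goal_reward(obs_img, goal_img):
    """Reward = negative distance between observation and goal in latent space."""
    with torch.no_grad():
        z_obs, z_goal = encoder(obs_img), encoder(goal_img)
    return -torch.norm(z_obs - z_goal, dim=-1)

# During self-supervised practice, the goal can itself be "imagined": decode a
# latent sampled from the VAE prior instead of using a user-provided image.
obs = torch.rand(4, 3, 48, 48)
goal = torch.rand(4, 3, 48, 48)
print(latent_goal_reward(obs, goal))
```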

Continual Unsupervised Representation Learning

NeurIPS 2019 deepmind/deepmind-research

Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially.

CONTINUAL LEARNING OMNIGLOT UNSUPERVISED REPRESENTATION LEARNING
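A tiny sketch of the continual setting the paper targets (not its method): tasks arrive one after another without task labels, the learner trains with a plain unsupervised loss on whatever it currently sees, and it never revisits old data. The task stream and autoencoder are toy stand-ins.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
decoder = nn.Linear(4, 8)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

# Non-stationary stream: each "task" is data drawn around a different mean.
task_means = [torch.zeros(8), torch.full((8,), 3.0), torch.full((8,), -3.0)]
for mean in task_means:                       # tasks are presented sequentially
    for _ in range(200):                      # old tasks are never revisited
        x = mean + torch.randn(64, 8)
        recon = decoder(encoder(x))
        loss = ((recon - x) ** 2).mean()      # plain unsupervised reconstruction loss
        opt.zero_grad(); loss.backward(); opt.step()
# Without an explicit mechanism against forgetting (replay, model expansion, ...),
# the representation drifts toward the most recent task.
```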

Unsupervised Representation Learning by Predicting Image Rotations

ICLR 2018 gidariss/FeatureLearningRotNet

However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale.

UNSUPERVISED REPRESENTATION LEARNING
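The rotation pretext task is simple to sketch: each unlabeled image is rotated by 0/90/180/270 degrees, and a classifier is trained to predict which rotation was applied; the convolutional features it learns are then reused downstream. The small network below is a placeholder for the standard classifiers used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(images):
    """Build the 4-way rotation task: rotate each image by 0/90/180/270 degrees
    and label it with the rotation index."""
    rotations = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    x = torch.cat(rotations, dim=0)
    y = torch.arange(4).repeat_interleave(images.size(0))
    return x, y

# Hypothetical small conv net; RotNet uses standard image classifiers here.
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 4),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

images = torch.rand(16, 3, 32, 32)        # unlabeled images
x, y = rotate_batch(images)
loss = F.cross_entropy(net(x), y)         # predict which rotation was applied
loss.backward(); opt.step()
```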

Spectral Inference Networks: Unifying Deep and Spectral Learning

ICLR 2019 deepmind/spectral_inference_networks

We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization.

ATARI GAMES STOCHASTIC OPTIMIZATION UNSUPERVISED REPRESENTATION LEARNING
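As a degenerate warm-up for what "learning eigenfunctions by stochastic optimization" means: the leading eigenfunction of a covariance operator is linear, so Oja's classic stochastic rule recovers the top principal direction from streaming samples. Spectral Inference Networks generalize far beyond this (nonlinear eigenfunctions of general linear operators, multiple eigenfunctions, bias-corrected gradients); the snippet below is only the simplest stochastic-eigenvector example, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); cov = A @ A.T     # ground-truth covariance operator
L = np.linalg.cholesky(cov)

w = rng.normal(size=5); w /= np.linalg.norm(w)
for _ in range(20000):
    x = L @ rng.normal(size=5)                 # stream of samples with covariance `cov`
    y = x @ w
    w += 0.001 * y * (x - y * w)               # Oja's stochastic update
    w /= np.linalg.norm(w)

top = np.linalg.eigh(cov)[1][:, -1]            # reference top eigenvector
print("alignment with true top eigenvector:", abs(w @ top))
```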

Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction

CVPR 2017 richzhang/splitbrainauto

We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning.

TRANSFER LEARNING UNSUPERVISED REPRESENTATION LEARNING
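A minimal sketch of cross-channel prediction, assuming an arbitrary first-channel/remaining-channels split (the paper splits e.g. L vs. ab color channels, or RGB vs. depth): each sub-network sees one channel group and predicts the other, and the two feature sets are concatenated at transfer time.

```python
import torch
import torch.nn as nn

def half_net(in_ch, out_ch):
    """Toy sub-network; the paper uses standard conv classifiers split in half."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

net_a = half_net(1, 2)      # sees channel 0, predicts channels 1-2
net_b = half_net(2, 1)      # sees channels 1-2, predicts channel 0

x = torch.rand(8, 3, 32, 32)
xa, xb = x[:, :1], x[:, 1:]
loss = ((net_a(xa) - xb) ** 2).mean() + ((net_b(xb) - xa) ** 2).mean()
loss.backward()
# At transfer time, features from both sub-networks (taken before the final
# prediction layer) are concatenated to form the full representation.
```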

AET vs. AED: Unsupervised Representation Learning by Auto-Encoding Transformations rather than Data

14 Jan 2019 maple-research-lab/AET

The success of deep neural networks often relies on a large amount of labeled examples, which can be difficult to obtain in many real scenarios.

UNSUPERVISED REPRESENTATION LEARNING
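A toy sketch of the auto-encoding-transformations setup: sample a transformation, encode both the original and the transformed image, and decode the transformation from the pair of codes. The paper regresses parameters of richer transformation families (e.g. affine/projective); discrete 90-degree rotations keep this illustration simple, and the encoder/decoder sizes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
decoder = nn.Linear(64, 4)                     # takes [enc(x), enc(t(x))], predicts t

x = torch.rand(16, 3, 32, 32)
k = torch.randint(0, 4, (x.size(0),))          # sample a transformation per image
tx = torch.stack([torch.rot90(img, int(ki), dims=(1, 2)) for img, ki in zip(x, k)])

codes = torch.cat([encoder(x), encoder(tx)], dim=-1)
loss = F.cross_entropy(decoder(codes), k)      # auto-encode the transformation, not the data
loss.backward()
```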