Unsupervised Representation Learning

9 papers with code · Methodology

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

19 Nov 2015 tensorflow/models

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention.

CONDITIONAL IMAGE GENERATION UNSUPERVISED REPRESENTATION LEARNING

Semi-Supervised Sequence Modeling with Cross-View Training

EMNLP 2018 tensorflow/models

We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data. On unlabeled examples, CVT teaches auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) to match the predictions of the full model seeing the whole input.

CCG SUPERTAGGING DEPENDENCY PARSING MACHINE TRANSLATION MULTI-TASK LEARNING NAMED ENTITY RECOGNITION UNSUPERVISED REPRESENTATION LEARNING
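
The following is a minimal sketch of the cross-view idea described in the CVT entry above: on unlabeled data, auxiliary heads that see only one direction of a BiLSTM encoder are trained to match the full model's frozen predictions. PyTorch and all module names are illustrative assumptions here, not the authors' TensorFlow implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CVTTagger(nn.Module):
        def __init__(self, vocab_size, emb_dim=100, hidden=128, num_tags=10):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.primary = nn.Linear(2 * hidden, num_tags)  # sees both directions
            self.aux_fwd = nn.Linear(hidden, num_tags)      # forward states only
            self.aux_bwd = nn.Linear(hidden, num_tags)      # backward states only

        def forward(self, tokens):
            h, _ = self.bilstm(self.embed(tokens))          # (batch, time, 2*hidden)
            h_fwd, h_bwd = h.chunk(2, dim=-1)               # restricted views of the encoder
            return self.primary(h), self.aux_fwd(h_fwd), self.aux_bwd(h_bwd)

    def cvt_unlabeled_loss(model, tokens):
        # Auxiliary views are trained to match the full model's soft predictions.
        full_logits, fwd_logits, bwd_logits = model(tokens)
        with torch.no_grad():
            target = F.softmax(full_logits, dim=-1)         # teacher signal, not updated here
        return sum(F.kl_div(F.log_softmax(lg, dim=-1), target, reduction="batchmean")
                   for lg in (fwd_logits, bwd_logits))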

Visual Reinforcement Learning with Imagined Goals

NeurIPS 2018 vitchyr/rlkit

For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images.

UNSUPERVISED REPRESENTATION LEARNING

Unsupervised Representation Learning by Predicting Image Rotations

ICLR 2018 gidariss/FeatureLearningRotNet

However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation applied to the image they receive as input.

UNSUPERVISED REPRESENTATION LEARNING
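
A minimal sketch of the rotation-prediction pretext task described above: each image is rotated by 0/90/180/270 degrees and a ConvNet is trained to classify which rotation was applied. PyTorch and the helper names are illustrative, not taken from the FeatureLearningRotNet repository.

    import torch
    import torch.nn.functional as F

    def rotation_batch(images):
        # images: (B, C, H, W) -> 4*B rotated copies plus rotation labels 0..3.
        rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
        labels = torch.arange(4).repeat_interleave(images.size(0))
        return rotated, labels

    def rotnet_loss(convnet, images):
        # The 4-way rotation classification is the only supervision signal;
        # the features learned by `convnet` are later reused for downstream tasks.
        rotated, labels = rotation_batch(images)
        return F.cross_entropy(convnet(rotated), labels)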

Learning Dynamics of Linear Denoising Autoencoders

ICML 2018 arnupretorius/lindaedynamics_icml2018

Here we develop theory for how noise influences learning in DAEs. We also show that our theoretical predictions approximate learning dynamics on real-world data and qualitatively match observed dynamics in nonlinear DAEs.

DENOISING UNSUPERVISED REPRESENTATION LEARNING
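
For context, the setting studied in the linear-DAE entry above can be sketched as a purely linear map trained to reconstruct clean inputs from noise-corrupted ones. This is a generic illustration of the model class, not the authors' analysis code; the hyperparameter values are arbitrary.

    import torch

    def train_linear_dae(X, hidden=32, noise_std=0.5, lr=1e-2, steps=1000):
        # X: (n, d) clean data; the linear map W_enc @ W_dec learns to undo additive noise.
        n, d = X.shape
        W_enc = (0.01 * torch.randn(d, hidden)).requires_grad_()
        W_dec = (0.01 * torch.randn(hidden, d)).requires_grad_()
        opt = torch.optim.SGD([W_enc, W_dec], lr=lr)
        for _ in range(steps):
            noisy = X + noise_std * torch.randn_like(X)   # corrupt the input
            recon = noisy @ W_enc @ W_dec                 # purely linear DAE, no nonlinearity
            loss = ((recon - X) ** 2).mean()              # reconstruct the clean data
            opt.zero_grad()
            loss.backward()
            opt.step()
        return W_enc.detach(), W_dec.detach()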

Learning Distributed Representations of Sentences from Unlabelled Data

HLT 2016 jihunchoi/sequential-denoising-autoencoder-tf

Unsupervised methods for learning distributed representations of words are ubiquitous in today's NLP research, but far less is known about the best ways to learn distributed phrase or sentence representations from unlabelled data. This paper is a systematic comparison of models that learn such representations.

UNSUPERVISED REPRESENTATION LEARNING

Sampling strategies in Siamese Networks for unsupervised speech representation learning

30 Apr 2018 bootphon/abnet3

Recent studies have investigated Siamese network architectures for learning invariant speech representations using same-different side information at the word level. We apply these results to pairs of words discovered by an unsupervised algorithm and show an improvement over the state of the art in unsupervised representation learning with Siamese networks.

UNSUPERVISED REPRESENTATION LEARNING
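
A generic sketch of the same-different Siamese setup referenced above: an encoder maps two speech segments to embeddings, and a contrastive-style loss pulls same-word pairs together and pushes different-word pairs apart. The loss form and the margin value are assumptions for illustration, not taken from abnet3.

    import torch.nn.functional as F

    def siamese_pair_loss(encoder, seg_a, seg_b, same, margin=0.5):
        # same: tensor of 1.0 for same-word pairs, 0.0 for different-word pairs.
        za = F.normalize(encoder(seg_a), dim=-1)
        zb = F.normalize(encoder(seg_b), dim=-1)
        cos = (za * zb).sum(-1)                       # cosine similarity per pair
        # Same pairs: pull similarity toward 1. Different pairs: push it below the margin.
        loss = same * (1.0 - cos) + (1.0 - same) * F.relu(cos - margin)
        return loss.mean()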

Efficient Representation Learning Using Random Walks for Dynamic Graphs

5 Jan 2019 shps/incremental-representation-learning

In this work, we propose computationally efficient algorithms for vertex representation learning that extend random walk based methods to dynamic graphs. We empirically evaluate our algorithms on real world datasets for downstream machine learning tasks of multi-class and multi-label vertex classification.

UNSUPERVISED REPRESENTATION LEARNING
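
A simplified sketch of the random-walk idea behind the dynamic-graph entry above: walks are generated per vertex and, when an edge is added, only the walks rooted at the affected vertices are refreshed. This is a deliberately crude illustration, not the authors' incremental algorithm.

    import random
    from collections import defaultdict

    def random_walk(adj, start, length=10):
        walk = [start]
        while len(walk) < length and adj[walk[-1]]:
            walk.append(random.choice(adj[walk[-1]]))
        return walk

    class DynamicWalkCorpus:
        def __init__(self, edges, walks_per_node=5, length=10):
            self.adj = defaultdict(list)
            for u, v in edges:
                self.adj[u].append(v)
                self.adj[v].append(u)
            self.walks_per_node, self.length = walks_per_node, length
            self.walks = {v: [random_walk(self.adj, v, length) for _ in range(walks_per_node)]
                          for v in list(self.adj)}

        def add_edge(self, u, v):
            # Only walks rooted at the two affected vertices are regenerated,
            # instead of rebuilding the whole corpus (a strong simplification).
            self.adj[u].append(v)
            self.adj[v].append(u)
            for node in (u, v):
                self.walks[node] = [random_walk(self.adj, node, self.length)
                                    for _ in range(self.walks_per_node)]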

Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction

CVPR 2017 ysharma1126/Split-Brain-Autoencoder

We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks.

TRANSFER LEARNING UNSUPERVISED REPRESENTATION LEARNING
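
A rough sketch of the cross-channel prediction described in the split-brain entry above: the input channels are split into two groups, and two disjoint sub-networks each predict the group they do not see. The channel split and layer sizes here are illustrative placeholders, not the architecture from the paper's repository.

    import torch.nn as nn

    class SplitBrainAutoencoder(nn.Module):
        def __init__(self, ch_a=1, ch_b=2, width=64):
            super().__init__()
            self.ch_a = ch_a
            # Two disjoint sub-networks: each sees one channel group and predicts the other.
            self.a_to_b = nn.Sequential(
                nn.Conv2d(ch_a, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, ch_b, 3, padding=1))
            self.b_to_a = nn.Sequential(
                nn.Conv2d(ch_b, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, ch_a, 3, padding=1))

        def forward(self, x):
            xa, xb = x[:, :self.ch_a], x[:, self.ch_a:]      # e.g. L vs. ab channels in Lab space
            return self.b_to_a(xb), self.a_to_b(xa), xa, xb  # predictions and targets for both halves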