
Representation Learning

501 papers with code · Methodology

Representation learning is concerned with training machine learning algorithms to learn useful representations, e.g. those that are interpretable, have latent features, or can be used for transfer learning.

(Image credit: Visualizing and Understanding Convolutional Networks)
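The idea above can be made concrete with a minimal sketch: a linear autoencoder that learns a low-dimensional latent representation by minimizing reconstruction error. All shapes and hyperparameters here are illustrative assumptions, not from any specific paper on this page.

```python
import numpy as np

# Minimal representation-learning sketch: a linear autoencoder compressing
# 8-dimensional inputs into a 2-dimensional latent code (assumed toy sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                 # toy dataset
W_enc = rng.normal(scale=0.1, size=(8, 2))    # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 8))    # decoder weights

lr = 0.01
for _ in range(500):
    Z = X @ W_enc                             # latent representation
    X_hat = Z @ W_dec                         # reconstruction
    err = X_hat - X
    # Gradient descent on the mean-squared reconstruction loss
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

Z = X @ W_enc  # learned 2-d representations, reusable for downstream tasks
```

The learned `Z` is the "representation": a compressed encoding that a downstream classifier could consume, which is the transfer-learning use case mentioned above.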


Greatest papers with code

Meta-Learning Update Rules for Unsupervised Representation Learning

ICLR 2019 tensorflow/models

Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.

META-LEARNING UNSUPERVISED REPRESENTATION LEARNING

Semi-Supervised Sequence Modeling with Cross-View Training

EMNLP 2018 tensorflow/models

We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.

CCG SUPERTAGGING DEPENDENCY PARSING MACHINE TRANSLATION MULTI-TASK LEARNING NAMED ENTITY RECOGNITION UNSUPERVISED REPRESENTATION LEARNING

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

NeurIPS 2016 tensorflow/models

This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.

IMAGE GENERATION REPRESENTATION LEARNING UNSUPERVISED IMAGE CLASSIFICATION UNSUPERVISED MNIST
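InfoGAN's central move can be sketched in a few lines: split the generator input into unstructured noise z and a structured code c, then train an auxiliary network Q to recover c from the generated sample, which lower-bounds the mutual information I(c; G(z, c)). The dimensions below (62-d noise, 10 categories) follow the paper's MNIST setup; the random `q_logits` stand in for a real Q network and are an assumption of this sketch.

```python
import numpy as np

# Hedged sketch of InfoGAN's latent-code construction and MI lower bound.
rng = np.random.default_rng(0)

n_categories = 10                                  # e.g. one code per digit class
z = rng.normal(size=(4, 62))                       # unstructured noise
c_idx = rng.integers(0, n_categories, size=4)      # sampled categorical code
c = np.eye(n_categories)[c_idx]                    # one-hot encoding
gen_input = np.concatenate([z, c], axis=1)         # fed to the generator G(z, c)

# Stand-in for Q's predictions of c from the generated images; in the real
# model these logits come from a network sharing layers with the discriminator.
q_logits = rng.normal(size=(4, n_categories))
log_q = q_logits - np.log(np.exp(q_logits).sum(axis=1, keepdims=True))
# The variational MI lower bound reduces to a cross-entropy term on c.
mi_lower_bound_loss = -(c * log_q).sum(axis=1).mean()
```

Minimizing this cross-entropy (jointly with the usual GAN losses) is what pushes individual dimensions of c to control interpretable, disentangled factors of the output.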

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

19 Nov 2015 · tensorflow/models

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.

CONDITIONAL IMAGE GENERATION UNSUPERVISED REPRESENTATION LEARNING

Unsupervised Cross-lingual Representation Learning at Scale

5 Nov 2019 · huggingface/transformers

We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale.

CROSS-LINGUAL TRANSFER LANGUAGE MODELLING REPRESENTATION LEARNING

Temporal Cycle-Consistency Learning

CVPR 2019 google-research/google-research

We introduce a self-supervised representation learning method based on the task of temporal alignment between videos.

ANOMALY DETECTION REPRESENTATION LEARNING VIDEO ALIGNMENT

TabNet: Attentive Interpretable Tabular Learning

20 Aug 2019 · google-research/google-research

We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet.

DECISION MAKING FEATURE SELECTION UNSUPERVISED REPRESENTATION LEARNING

On Mutual Information Maximization for Representation Learning

ICLR 2020 google-research/google-research

Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data.

REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION
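The MI-maximization recipe this paper examines is typified by the InfoNCE estimator: features from two views of the same example should score higher with each other than with mismatched pairs. A minimal sketch, with assumed batch size, feature dimension, and view-generation noise:

```python
import numpy as np

# InfoNCE-style mutual-information lower bound between two views of the
# same samples. Sizes and the noise model are illustrative assumptions.
rng = np.random.default_rng(0)
batch, dim = 8, 16

# Paired features: shared structure makes matched pairs (positives) more
# similar than mismatched pairs (negatives).
shared = rng.normal(size=(batch, dim))
view_a = shared + 0.1 * rng.normal(size=(batch, dim))
view_b = shared + 0.1 * rng.normal(size=(batch, dim))

scores = view_a @ view_b.T                     # pairwise similarity matrix
log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
# Diagonal entries are the matched pairs; the bound is capped at log(batch).
info_nce = np.diag(log_softmax).mean() + np.log(batch)
```

The log(batch) cap is one of the estimation issues the paper probes: the bound saturates quickly, so a higher MI estimate does not always mean a better representation.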

Practical and Consistent Estimation of f-Divergences

NeurIPS 2019 google-research/google-research

The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning.

REPRESENTATION LEARNING
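The estimation problem the paper studies can be illustrated with the simplest case, the KL divergence (f(t) = t log t), via Monte Carlo: KL(p || q) = E_p[log p(x) - log q(x)]. In the paper's setting the densities are unknown and must themselves be estimated; here they are known Gaussians, which keeps the sketch self-contained (parameters are illustrative assumptions).

```python
import numpy as np

# Monte Carlo estimate of KL(p || q) between two 1-d Gaussians from
# samples of p, using the (here known) log-densities.
rng = np.random.default_rng(0)

def log_gauss(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

mu_p, sigma_p = 0.0, 1.0
mu_q, sigma_q = 1.0, 1.0
x = rng.normal(mu_p, sigma_p, size=100_000)            # samples from p
kl_estimate = (log_gauss(x, mu_p, sigma_p) - log_gauss(x, mu_q, sigma_q)).mean()
# Closed form for equal variances: KL = (mu_p - mu_q)^2 / (2 sigma^2) = 0.5
```

Replacing the known log-densities with estimated ones is exactly where consistency becomes nontrivial, which is the gap the paper addresses.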