
Representation Learning

181 papers with code · Methodology

Representation learning is concerned with training machine learning algorithms to learn useful representations, e.g. those that are interpretable, have latent features, or can be used for transfer learning.

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Near-Optimal Representation Learning for Hierarchical Reinforcement Learning

ICLR 2019 tensorflow/models

We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning. In such hierarchical structures, a higher-level controller solves tasks by iteratively communicating goals which a lower-level policy is trained to reach.

CONTINUOUS CONTROL HIERARCHICAL REINFORCEMENT LEARNING REPRESENTATION LEARNING
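
As a rough illustration of the goal-conditioned structure this paper studies, the sketch below shows a two-level control loop: a high-level controller emits a goal every c steps, and the low-level policy is rewarded for moving the state toward that goal. The toy point-mass environment, the policies, and the reward shape are illustrative assumptions, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

def intrinsic_reward(state, next_state, goal):
    # Low-level reward: negative distance between the achieved state change
    # and the commanded goal displacement (a standard goal-reaching shape).
    return -np.linalg.norm((state + goal) - next_state)

def step(state, action):
    # Toy 2-D point-mass "environment": the action is a bounded velocity.
    return state + np.clip(action, -1.0, 1.0)

def high_policy(state):
    # Hypothetical high-level controller: proposes a displacement goal.
    return rng.uniform(-3.0, 3.0, size=2)

def low_policy(state, goal_state):
    # Hypothetical low-level policy: greedily moves toward the goal state.
    return goal_state - state

state = np.zeros(2)
c = 5  # the high level re-samples a goal every c steps
for t in range(20):
    if t % c == 0:
        goal_state = state + high_policy(state)
    next_state = step(state, low_policy(state, goal_state))
    r_low = intrinsic_reward(state, next_state, goal_state - state)
    # (training updates for both levels would go here)
    state = next_state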

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

NeurIPS 2016 tensorflow/models

This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation.

IMAGE GENERATION REPRESENTATION LEARNING
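
The mutual-information term described above is typically made tractable with a variational lower bound: an auxiliary network Q predicts the latent code c from the generated sample, and the generator is trained to make that prediction easy. A minimal PyTorch sketch of that term follows; the network shapes and names are illustrative, not the paper's code.

import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, code_dim, data_dim = 16, 4, 32

# G maps (noise z, categorical code c) to a sample; Q recovers c from the sample.
G = nn.Sequential(nn.Linear(latent_dim + code_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
Q = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, code_dim))

z = torch.randn(8, latent_dim)
c = F.one_hot(torch.randint(code_dim, (8,)), code_dim).float()

x_fake = G(torch.cat([z, c], dim=1))
logits = Q(x_fake)

# Variational lower bound on I(c; G(z, c)): maximize log Q(c | x_fake),
# i.e. minimize the cross-entropy between Q's prediction and the true code.
mi_loss = F.cross_entropy(logits, c.argmax(dim=1))
# The full generator objective would be: adversarial_loss + lambda * mi_loss.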

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

19 Nov 2015 tensorflow/models

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention.

CONDITIONAL IMAGE GENERATION UNSUPERVISED REPRESENTATION LEARNING

Semi-Supervised Sequence Modeling with Cross-View Training

EMNLP 2018 tensorflow/models

We propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data. On unlabeled examples, CVT teaches auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) to match the predictions of the full model seeing the whole input.

CCG SUPERTAGGING DEPENDENCY PARSING MACHINE TRANSLATION MULTI-TASK LEARNING NAMED ENTITY RECOGNITION UNSUPERVISED REPRESENTATION LEARNING
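
A minimal sketch of the cross-view consistency idea on unlabeled data: an auxiliary head that sees only part of the input is trained to match the (frozen) prediction of the full model. The module names and the restricted view used here (masking half the features) are simplifying assumptions; the paper's views are structural, e.g. forward-only LSTM states over part of a sentence.

import torch
import torch.nn as nn
import torch.nn.functional as F

dim, n_classes = 32, 5
full_model = nn.Linear(dim, n_classes)  # sees the whole input
aux_head = nn.Linear(dim, n_classes)    # sees a restricted view

x_unlabeled = torch.randn(16, dim)

# Teacher: full-view prediction, treated as a fixed target (no gradient).
with torch.no_grad():
    teacher = F.softmax(full_model(x_unlabeled), dim=1)

# Restricted view: crudely zero out half of the input features.
view = x_unlabeled.clone()
view[:, dim // 2 :] = 0.0
student_logp = F.log_softmax(aux_head(view), dim=1)

# Train the auxiliary head to match the full model's predictions.
cvt_loss = F.kl_div(student_logp, teacher, reduction="batchmean")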

Tencent ML-Images: A Large-Scale Multi-Label Image Database for Visual Representation Learning

7 Jan 2019 Tencent/tencent-ml-images

In existing visual representation learning tasks, deep convolutional neural networks (CNNs) are often trained on images annotated with single tags, such as ImageNet. In this work, we propose to train CNNs from images annotated with multiple tags, to enhance the quality of visual representation of the trained CNN model.

IMAGE CLASSIFICATION OBJECT DETECTION REPRESENTATION LEARNING SEMANTIC SEGMENTATION TRANSFER LEARNING
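
Training on images annotated with multiple tags mainly changes the output layer and loss: instead of a single softmax over classes, each class gets an independent sigmoid trained with binary cross-entropy against a multi-hot target. A minimal sketch of that loss, with illustrative shapes and made-up targets:

import torch
import torch.nn.functional as F

n_classes = 10
logits = torch.randn(4, n_classes)  # backbone outputs for 4 images

# Multi-hot targets: each image may carry several tags at once.
targets = torch.zeros(4, n_classes)
targets[0, [1, 3]] = 1.0
targets[1, [0]] = 1.0
targets[2, [2, 5, 7]] = 1.0
targets[3, [9]] = 1.0

# Independent per-class sigmoid + binary cross-entropy replaces the
# single-label softmax cross-entropy used in ImageNet-style training.
loss = F.binary_cross_entropy_with_logits(logits, targets)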

Revisiting Unreasonable Effectiveness of Data in Deep Learning Era

ICCV 2017 Tencent/tencent-ml-images

The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) availability of large-scale labeled data. What will happen if we increase the dataset size by 10x or 100x?

IMAGE CLASSIFICATION OBJECT DETECTION POSE ESTIMATION REPRESENTATION LEARNING SEMANTIC SEGMENTATION

DisSent: Sentence Representation Learning from Explicit Discourse Relations

12 Oct 2017 facebookresearch/InferSent

Sentence vectors represent an appealing approach to meaning: learn an embedding that encompasses the meaning of a sentence in a single vector, that can be used for a variety of semantic tasks. Existing models for learning sentence embeddings either require extensive computational resources to train on large corpora, or are trained on costly, manually curated datasets of sentence relations.

SENTENCE EMBEDDINGS

Poincaré Embeddings for Learning Hierarchical Representations

NeurIPS 2017 facebookresearch/poincare-embeddings

Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically learn embeddings in Euclidean vector spaces, which do not account for this property.

GRAPH EMBEDDING
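
The key change relative to Euclidean embeddings is the metric: points live in the open unit ball, and distances grow rapidly near the boundary, which gives the space room for exponentially branching hierarchies. A NumPy sketch of the Poincaré-ball distance, with illustrative example points:

import numpy as np

def poincare_distance(u, v, eps=1e-9):
    # Distance in the Poincare ball model (all points must satisfy ||x|| < 1):
    # d(u, v) = arcosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / max(denom, eps))

root = np.array([0.0, 0.0])   # near the origin: the "top" of a hierarchy
leaf = np.array([0.0, 0.95])  # near the boundary: a deep node
print(poincare_distance(root, leaf))  # distances blow up toward the boundary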

NiftyNet: a deep-learning platform for medical imaging

11 Sep 2017 NifTK/NiftyNet

NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications, including segmentation, regression, image generation, and representation learning. It enables researchers to rapidly develop and distribute deep learning solutions for these applications, or to extend the platform to new ones.

IMAGE GENERATION MEDICAL IMAGE GENERATION REPRESENTATION LEARNING

Inductive Representation Learning on Large Graphs

NeurIPS 2017 williamleif/GraphSAGE

Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes.

NODE CLASSIFICATION REPRESENTATION LEARNING
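
The inductive idea described above is to learn an aggregation function over sampled neighbor features rather than a fixed per-node embedding table, so that unseen nodes can be embedded from their features at test time. A minimal sketch of a single mean-aggregator layer; the toy graph, weights, and sample size are assumptions, not the paper's reference code.

import numpy as np

rng = np.random.default_rng(0)

def sage_layer(features, neighbors, W, sample_size=2):
    # One GraphSAGE-style layer with a mean aggregator:
    # h_v = ReLU(W^T [x_v ; mean(x_u for sampled neighbors u)]), then normalize.
    out = []
    for v, nbrs in enumerate(neighbors):
        sampled = rng.choice(nbrs, size=min(sample_size, len(nbrs)), replace=False)
        agg = features[sampled].mean(axis=0)
        h = np.concatenate([features[v], agg]) @ W
        out.append(np.maximum(h, 0.0))
    h = np.stack(out)
    return h / np.maximum(np.linalg.norm(h, axis=1, keepdims=True), 1e-9)

# Toy graph: 4 nodes with feature vectors and adjacency lists.
X = rng.normal(size=(4, 8))
adj = [[1, 2], [0, 3], [0, 3], [1, 2]]
W = rng.normal(size=(16, 8))  # maps concat(self, neighbor mean) -> new embedding

embeddings = sage_layer(X, adj, W)  # applies to any node with features, seen or unseen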