Representation Learning
2148 papers with code • 5 benchmarks • 5 datasets
Representation learning is concerned with training machine learning algorithms to learn useful representations, e.g. those that are interpretable, capture latent features, or can be reused for transfer learning.
Deep neural networks can be considered representation learning models: they encode their input and project it into a different subspace. The resulting representation is then typically passed to a simple model, for instance a linear classifier, to perform the end task.
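As a concrete illustration, the snippet below trains a "linear probe": the encoder is frozen and only a linear classifier is fit on top of its representations. This is a minimal sketch assuming PyTorch and torchvision; the ResNet-18 backbone, the 10-way class count, and the random batch are placeholder choices.

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze a pretrained encoder and train only a linear classifier on top.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()              # expose the 512-d penultimate features
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

classifier = nn.Linear(512, 10)         # 10 classes is a placeholder
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

images = torch.randn(8, 3, 224, 224)    # stand-in batch of images
labels = torch.randint(0, 10, (8,))

with torch.no_grad():
    features = encoder(images)          # learned representations, shape (8, 512)
loss = nn.functional.cross_entropy(classifier(features), labels)
loss.backward()
optimizer.step()
```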
Representation learning can be divided into:
- Supervised representation learning: learning representations on task A using annotated data, then reusing them to solve task B
- Unsupervised representation learning: learning representations on a task from label-free data. These are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks (see the sketch after this list).
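For instance, here is a minimal sketch of reusing BERT's pre-trained representations as frozen sentence features, assuming the Hugging Face `transformers` library; the mean-pooling step and the example sentences are illustrative choices rather than a prescribed recipe.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = ["representation learning reduces labeling cost",
             "frozen features can feed a small downstream model"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    out = model(**batch)                         # last_hidden_state: (B, T, 768)

# Mean-pool over non-padding tokens to get one vector per sentence; these
# embeddings can then be fed to a small classifier for a downstream task.
mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
embeddings = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embeddings.shape)                          # torch.Size([2, 768])
```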
More recently, self-supervised learning (SSL) has become one of the main drivers behind unsupervised representation learning in fields like computer vision and NLP.
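Many of these SSL methods optimize an InfoNCE-style contrastive objective: embeddings of two augmented views of the same image are pulled together while the other images in the batch serve as negatives. Below is a simplified, one-directional PyTorch sketch (SimCLR's full NT-Xent loss symmetrizes over both views); the temperature and tensor shapes are placeholder values.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: (N, D) embeddings of the same N images under two different
    # augmentations; matching rows are positives, all other rows negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)   # stand-in embeddings
print(info_nce(z1, z2).item())
```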
Here are some additional readings to go deeper on the task:
- Representation Learning: A Review and New Perspectives - Bengio et al. (2012)
- A Few Words on Representation Learning - Thalles Silva
(Image credit: Visualizing and Understanding Convolutional Networks)
Subtasks
- Word Embeddings
- Disentanglement
- Graph Embedding
- Graph Representation Learning
- Sentence Embeddings
- Knowledge Graph Embedding
- Network Embedding
- Sentence Embedding
- Knowledge Graph Embeddings
- Document Embedding
- Learning Word Embeddings
- Multilingual Word Embeddings
- Learning Semantic Representations
- Learning Network Representations
- Sentence Embeddings For Biomedical Texts
- Learning Representation Of Multi-View Data
- Learning Representation On Graph
Most implemented papers
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.
Neural Discrete Representation Learning
Learning useful representations without supervision remains a key challenge in machine learning.
High-Resolution Representations for Labeling Pixels and Regions
The proposed approach achieves superior results to existing single-model networks on COCO object detection.
Momentum Contrast for Unsupervised Visual Representation Learning
This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning.
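The two mechanisms behind this dictionary are a momentum-updated key encoder and a FIFO queue of past keys. A minimal PyTorch sketch of both, with the momentum coefficient, queue size, and stand-in linear encoders as placeholder choices:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    # The key encoder is an exponential moving average of the query encoder,
    # which keeps the keys in the dictionary slowly evolving and consistent.
    for q, k in zip(query_encoder.parameters(), key_encoder.parameters()):
        k.mul_(m).add_((1.0 - m) * q)

def enqueue_dequeue(queue, keys):
    # queue: (K, D) negatives; keys: (N, D) newest keys for the current batch.
    # Append the new keys and drop the oldest, keeping the queue size fixed.
    return torch.cat([queue[keys.size(0):], keys.detach()], dim=0)

query_enc, key_enc = nn.Linear(32, 16), nn.Linear(32, 16)
key_enc.load_state_dict(query_enc.state_dict())        # start identical

queue = torch.randn(4096, 16)                          # the large dictionary
keys = key_enc(torch.randn(64, 32))                    # keys for this batch
queue = enqueue_dequeue(queue, keys)
momentum_update(query_enc, key_enc)
```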
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.
Deep High-Resolution Representation Learning for Human Pose Estimation
We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the multi-resolution subnetworks in parallel.
Deep High-Resolution Representation Learning for Visual Recognition
High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection.
Domain-Adversarial Training of Neural Networks
Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains.
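The mechanism enforcing this is a gradient reversal layer: it is the identity on the forward pass but flips (and scales) gradients on the backward pass, so the feature extractor is pushed to produce features that confuse the domain classifier. A minimal PyTorch sketch, with the reversal strength lambda as a placeholder:

```python
import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward direction; multiplies incoming gradients by
    # -lambd in the backward direction, adversarially training the features.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

features = torch.randn(4, 16, requires_grad=True)
reversed_feats = GradReverse.apply(features, 1.0)  # fed to a domain classifier
reversed_feats.sum().backward()
print(features.grad[0, 0])                         # -1.0: the sign is flipped
```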
Improved Baselines with Momentum Contrastive Learning
Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.
Bootstrap your own latent: A new approach to self-supervised Learning
From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
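Concretely, BYOL minimizes a negative cosine similarity between the online network's prediction and a stop-gradient projection from the target network (itself an exponential moving average of the online network), symmetrized over the two views. A minimal sketch with stand-in tensors in place of the actual encoder, projector, and predictor:

```python
import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    # Negative cosine similarity (written as 2 - 2*cos) between the online
    # network's prediction and the stop-gradient target projection.
    online_pred = F.normalize(online_pred, dim=1)
    target_proj = F.normalize(target_proj.detach(), dim=1)
    return 2 - 2 * (online_pred * target_proj).sum(dim=1).mean()

p1, p2 = torch.randn(16, 256), torch.randn(16, 256)   # online predictions
z1, z2 = torch.randn(16, 256), torch.randn(16, 256)   # target projections
loss = byol_loss(p1, z2) + byol_loss(p2, z1)          # symmetrized over views
```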