Representation Learning

1289 papers with code • 2 benchmarks • 1 dataset

Representation learning is concerned with training machine learning models to produce useful representations of data, e.g. representations that are interpretable, that capture latent structure, or that transfer well to downstream tasks.

(Image credit: Visualizing and Understanding Convolutional Networks)


Greatest papers with code

Semi-Supervised Sequence Modeling with Cross-View Training

tensorflow/models • EMNLP 2018

We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.

CCG Supertagging • Dependency Parsing • +5

Meta-Learning Update Rules for Unsupervised Representation Learning

tensorflow/models • ICLR 2019

Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.

Meta-Learning • Unsupervised Representation Learning

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

tensorflow/models • 19 Nov 2015

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.

Conditional Image Generation • Image Clustering • +1

Unsupervised Cross-lingual Representation Learning for Speech Recognition

huggingface/transformers • 24 Jun 2020

This paper presents XLSR, which learns cross-lingual speech representations by pretraining a single model on the raw waveform of speech in multiple languages.

Quantization • Representation Learning • +1

Unsupervised Cross-lingual Representation Learning at Scale

huggingface/transformers • ACL 2020

We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale.

Cross-Lingual Transfer • Language Modelling • +2

Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning

google-research/google-research • ICLR 2021

Specifically, we introduce a theoretically motivated policy similarity metric (PSM) for measuring behavioral similarity between states.

Representation Learning

Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching

google-research/google-research • 26 Apr 2020

In order to better capture sentence level semantic relations within a document, we pre-train the model with a novel masked sentence block language modeling task in addition to the masked word language modeling task used by BERT.

Information Retrieval • Language Modelling • +4
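The sentence-block masking idea above can be sketched in a few lines: whole sentence blocks, rather than individual tokens, are hidden and later predicted. The helper below is hypothetical (the function name, mask token, and 15% rate are illustrative assumptions, not the paper's exact procedure):

```python
import random

def mask_sentence_blocks(blocks, mask_prob=0.15, mask_token="[BLOCK_MASK]", seed=None):
    """Randomly replace whole sentence blocks with a mask token.

    Returns the corrupted block sequence and the (index, original block)
    pairs the model would be trained to reconstruct.
    """
    rng = random.Random(seed)
    corrupted, targets = [], []
    for i, block in enumerate(blocks):
        if rng.random() < mask_prob:
            corrupted.append(mask_token)   # hide the entire sentence block
            targets.append((i, block))     # remember what to predict
        else:
            corrupted.append(block)
    return corrupted, targets
```

This complements token-level masking: the model must recover sentence-level semantics, not just individual words.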

Supervised Contrastive Learning

google-research/google-research • NeurIPS 2020

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.

Contrastive Learning • Data Augmentation • +3
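The supervised variant of the contrastive objective pulls together embeddings that share a label and pushes apart the rest. A minimal NumPy sketch of such a loss (an illustrative re-implementation, not the authors' released code; `tau` is the temperature):

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over embeddings z of shape (n, d)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / tau                                # pairwise similarities
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    # exclude each anchor itself from the softmax denominator
    sim_masked = np.where(eye, -np.inf, sim)
    log_prob = sim - np.log(np.exp(sim_masked).sum(axis=1, keepdims=True))
    # positives: other samples with the same label
    pos = (labels[:, None] == labels[None, :]) & ~eye
    # mean log-probability of positives per anchor, averaged over anchors
    per_anchor = np.where(pos, log_prob, 0.0).sum(1) / np.maximum(pos.sum(1), 1)
    return -per_anchor.mean()
```

Tightly clustered same-label embeddings should yield a lower loss than randomly scattered ones.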

On Mutual Information Maximization for Representation Learning

google-research/google-research • ICLR 2020

Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data.

Representation Learning • Self-Supervised Image Classification
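A widely used MI estimate in this line of work is the InfoNCE lower bound, which scores how well each sample identifies its paired view among a batch. A minimal NumPy sketch (the cosine-similarity critic and temperature `tau` are illustrative assumptions, not necessarily the critics studied in the paper):

```python
import numpy as np

def infonce_bound(x, y, tau=1.0):
    """InfoNCE lower bound on I(X; Y) from paired views (x_i, y_i).

    Equals log(n) minus the cross-entropy of matching each x_i
    to its true partner y_i among all n candidates.
    """
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    scores = x @ y.T / tau                 # critic: scaled cosine similarity
    n = len(x)
    # row-wise log-softmax; the diagonal holds the true pairs
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return np.log(n) + np.mean(np.diag(log_probs))
```

The bound is capped at log(n) for a batch of size n, one of the limitations the paper's analysis turns on: a large MI cannot be certified with a small batch.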