Representation learning is concerned with training machine learning models to learn useful representations, e.g. representations that are interpretable, capture latent features, or can be reused for transfer learning.
We propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.
SOTA for CCG Supertagging on CCGBank
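A minimal sketch of the cross-view idea behind CVT, assuming a sequence-tagging setup: a primary prediction module sees the full Bi-LSTM output, auxiliary modules see restricted views (e.g. only the forward or only the backward direction), and on unlabeled data the auxiliary modules are trained to match the primary module's soft predictions. All class and function names here are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVTTagger(nn.Module):
    """Illustrative cross-view training setup (names are hypothetical)."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Primary head sees the full bidirectional representation.
        self.primary = nn.Linear(2 * hidden_dim, num_tags)
        # Auxiliary heads see restricted views (forward-only / backward-only).
        self.aux_fwd = nn.Linear(hidden_dim, num_tags)
        self.aux_bwd = nn.Linear(hidden_dim, num_tags)

    def forward(self, tokens):
        h, _ = self.bilstm(self.embed(tokens))   # (B, T, 2H)
        fwd, bwd = h.chunk(2, dim=-1)            # restricted views
        return self.primary(h), self.aux_fwd(fwd), self.aux_bwd(bwd)

def cvt_step(model, labeled, unlabeled, optimizer):
    """One combined step: supervised loss on labeled data,
    cross-view consistency loss on unlabeled data."""
    x_l, y_l = labeled
    x_u = unlabeled
    optimizer.zero_grad()

    # Supervised loss: primary head on labeled sentences.
    logits_l, _, _ = model(x_l)
    sup_loss = F.cross_entropy(logits_l.transpose(1, 2), y_l)

    # Consistency loss: the restricted-view heads match the primary
    # head's soft predictions on unlabeled sentences.
    logits_u, aux_f, aux_b = model(x_u)
    target = F.softmax(logits_u, dim=-1).detach()   # teacher is not updated
    cons_loss = (F.kl_div(F.log_softmax(aux_f, dim=-1), target,
                          reduction='batchmean') +
                 F.kl_div(F.log_softmax(aux_b, dim=-1), target,
                          reduction='batchmean'))

    (sup_loss + cons_loss).backward()
    optimizer.step()
    return sup_loss.item(), cons_loss.item()
```

Because the auxiliary heads only see partial views, matching the primary prediction forces the shared encoder to produce representations that are informative from any view, which is how the unlabeled data improves the encoder.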
We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning.
Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.
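A compressed, hypothetical sketch of that setup: an inner loop applies a learned update rule to an encoder using only unlabeled data, and the outer loop scores the resulting representation on a small labeled set and backpropagates into the rule's parameters. The `UpdateRule` class, the single-linear-layer encoder, and the nearest-class-mean probe are stand-ins chosen for brevity, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpdateRule(nn.Module):
    """Hypothetical learned update rule: maps simple layer statistics
    (mean input, mean activation) to a weight change for that layer."""
    def __init__(self, in_dim, rep_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + rep_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, in_dim * rep_dim))
        self.in_dim, self.rep_dim = in_dim, rep_dim

    def forward(self, x_mean, h_mean):
        stats = torch.cat([x_mean, h_mean], dim=-1)
        return self.net(stats).view(self.rep_dim, self.in_dim)

def inner_unsupervised_updates(rule, W, x_unlabeled, steps=5, lr=0.1):
    """Apply the learned rule to an encoder weight W on unlabeled data.
    W stays a plain tensor so the outer loss can backprop into `rule`."""
    for _ in range(steps):
        h = torch.tanh(x_unlabeled @ W.t())
        W = W + lr * rule(x_unlabeled.mean(0), h.mean(0))
    return W

def outer_step(rule, meta_opt, x_unlabeled, x_labeled, y_labeled,
               in_dim, rep_dim):
    """Outer loop: evaluate the learned representation on a small labeled
    set and update the rule's parameters with that loss."""
    meta_opt.zero_grad()
    W0 = torch.randn(rep_dim, in_dim) * 0.1
    W = inner_unsupervised_updates(rule, W0, x_unlabeled)
    reps = torch.tanh(x_labeled @ W.t())
    # Nearest-class-mean probe as the meta-objective
    # (assumes labels are 0..C-1).
    protos = torch.stack([reps[y_labeled == c].mean(0)
                          for c in torch.unique(y_labeled)])
    loss = F.cross_entropy(-torch.cdist(reps, protos), y_labeled)
    loss.backward()
    meta_opt.step()
    return loss.item()
```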
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.
#5 best model for Unsupervised MNIST on MNIST
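A minimal sketch of InfoGAN's mutual-information term, assuming a single categorical latent code; the module names, dimensions, and the standalone `info_loss` function are illustrative, not the paper's implementation. An auxiliary head Q (sharing a trunk with the discriminator, as in the paper) is trained to recover the sampled code from the generated image, which gives a variational lower bound on the mutual information between the code and the generator output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Dimensions are illustrative: 62-dim noise, one 10-way categorical code.
NOISE_DIM, CODE_DIM, IMG_DIM = 62, 10, 784

G = nn.Sequential(nn.Linear(NOISE_DIM + CODE_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())
# Shared trunk for the discriminator D and the auxiliary code predictor Q.
trunk = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2))
D_head = nn.Linear(256, 1)         # real/fake logit
Q_head = nn.Linear(256, CODE_DIM)  # logits over the categorical code

def info_loss(batch_size):
    """Variational lower bound on I(c; G(z, c)) for a categorical code:
    reconstruct the sampled code from the generated image with Q."""
    z = torch.randn(batch_size, NOISE_DIM)
    c = torch.randint(0, CODE_DIM, (batch_size,))
    c_onehot = F.one_hot(c, CODE_DIM).float()
    fake = G(torch.cat([z, c_onehot], dim=1))
    q_logits = Q_head(trunk(fake))
    # Minimizing this cross-entropy maximizes the MI lower bound.
    return F.cross_entropy(q_logits, c)
```

In the full objective this term, scaled by a hyperparameter, is added to the standard GAN losses and optimized jointly with respect to G and Q, which is what pushes the latent code to control interpretable, disentangled factors of the output.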
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.
#8 best model for Conditional Image Generation on CIFAR-10
We introduce PyTorch Geometric, a library for deep learning on irregularly structured input data such as graphs, point clouds and manifolds, built upon PyTorch.
#2 best model for Graph Classification on REDDIT-B
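A short example of the library's data model and message-passing layers, assuming `torch_geometric` is installed: a graph is a `Data` object holding node features and an edge index, and `GCNConv` layers consume both. The toy graph and dimensions are illustrative.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 4 nodes with 3 features each; edges stored as a
# 2 x num_edges index tensor (source row, target row).
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)
data = Data(x=x, edge_index=edge_index)

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN(in_dim=3, hidden=16, num_classes=2)
out = model(data)   # per-node class logits, shape (4, 2)
```

For graph-level tasks such as the REDDIT-B classification above, the node embeddings would additionally be pooled into a single graph embedding (e.g. with `global_mean_pool`) before the final classifier.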
High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection.
SOTA for Semantic Segmentation on Cityscapes test (using extra training data)
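A hedged sketch of the high-resolution idea: rather than recovering spatial resolution from a low-resolution bottleneck, parallel streams at different resolutions are kept throughout the network and repeatedly exchange information. The two-stream block below is a simplified illustration; channel counts and module names are made up for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamExchange(nn.Module):
    """Illustrative two-resolution exchange unit: each stream keeps its own
    convolutions, and the streams are fused by up/down-sampling and adding."""
    def __init__(self, ch_high=32, ch_low=64):
        super().__init__()
        self.high = nn.Conv2d(ch_high, ch_high, 3, padding=1)
        self.low = nn.Conv2d(ch_low, ch_low, 3, padding=1)
        # Cross-resolution connections.
        self.high_to_low = nn.Conv2d(ch_high, ch_low, 3, stride=2, padding=1)
        self.low_to_high = nn.Conv2d(ch_low, ch_high, 1)

    def forward(self, x_high, x_low):
        h = F.relu(self.high(x_high))
        l = F.relu(self.low(x_low))
        # Fuse: the high-res stream receives upsampled low-res features and
        # vice versa, so a high-resolution representation is maintained
        # throughout instead of being recovered at the end.
        h_out = h + F.interpolate(self.low_to_high(l), size=h.shape[-2:],
                                  mode='bilinear', align_corners=False)
        l_out = l + self.high_to_low(h)
        return h_out, l_out

block = TwoStreamExchange()
x_high = torch.randn(1, 32, 64, 64)
x_low = torch.randn(1, 64, 32, 32)
h, l = block(x_high, x_low)   # shapes preserved: (1,32,64,64), (1,64,32,32)
```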
The proposed approach achieves superior results to existing single-model networks on COCO object detection.
#2 best model for Semantic Segmentation on LIP val