Representation learning is concerned with learning representations of data that are useful for downstream tasks, e.g. representations that are interpretable, capture latent factors, or transfer across tasks.
(Image credit: Visualizing and Understanding Convolutional Networks)
Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.
#5 best model for Unsupervised MNIST on MNIST
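InfoGAN's disentanglement comes from adding a variational lower bound on the mutual information between a latent code c and the generator output to the usual GAN objective. A minimal NumPy sketch of that bound for a categorical code, assuming a uniform prior over k values (the function name and the auxiliary-network logits `q_logits` are illustrative, not the paper's code):

```python
import numpy as np

def categorical_mi_lower_bound(q_logits, codes):
    """Monte-Carlo estimate of L_I = E[log Q(c|x)] + H(c) for a
    categorical latent code c with a uniform prior over k values.

    q_logits: (n, k) logits from the auxiliary network Q evaluated
              on generator samples x = G(z, c).
    codes:    (n,)   integer codes that were fed to the generator.
    """
    n, k = q_logits.shape
    # numerically stable log-softmax
    shifted = q_logits - q_logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    log_q = log_probs[np.arange(n), codes]  # log Q(c_i | x_i)
    entropy = np.log(k)                     # H(c) for a uniform prior
    return log_q.mean() + entropy
```

When Q is uninformative (uniform logits) the bound is 0; when Q recovers the code perfectly it approaches log(k), the full entropy of the code. Maximizing this term alongside the GAN loss is what encourages each code dimension to control a distinct factor of variation.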
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.
#9 best model for Conditional Image Generation on CIFAR-10
We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.
SOTA for CCG Supertagging on CCGBank
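The core of CVT is a consistency objective on unlabeled data: auxiliary prediction modules that each see only a restricted view of the input are trained to match the primary module's full-view prediction, which acts as a fixed teacher. A minimal sketch of that per-example loss, assuming a KL divergence between the two output distributions (function and argument names are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cvt_consistency_loss(primary_logits, view_logits):
    """Mean KL(p_primary || p_view) over a batch. In CVT the primary
    (full-view) prediction is treated as a fixed target, so gradients
    would flow only through view_logits during training."""
    p = softmax(primary_logits)
    log_p = np.log(p + 1e-12)
    log_q = np.log(softmax(view_logits) + 1e-12)
    return (p * (log_p - log_q)).sum(axis=-1).mean()
```

The loss is zero when a restricted view already reproduces the full-view distribution and positive otherwise, so minimizing it pushes each auxiliary module to extract the teacher's signal from partial context.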
We propose a dynamic neighborhood aggregation (DNA) procedure guided by (multi-head) attention for representation learning on graphs.
#6 best model for Node Classification on Citeseer
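Attention-guided aggregation replaces the uniform neighbor averaging of standard GNN layers with learned weights: the node's own state forms the query, and its neighbors form keys and values. A simplified single-head sketch (the full DNA procedure is multi-head and also attends over representations from earlier layers, which is omitted here; all names are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_aggregate(h_v, h_neighbors, Wq, Wk, Wv):
    """Scaled dot-product attention over one node's neighborhood.
    h_v: (d,) node state; h_neighbors: (m, d) neighbor states."""
    q = h_v @ Wq                          # query from the node itself
    K = h_neighbors @ Wk                  # one key per neighbor
    V = h_neighbors @ Wv                  # one value per neighbor
    scores = K @ q / np.sqrt(q.shape[0])  # scaled dot-product scores
    alpha = softmax(scores)               # attention weights over neighbors
    return alpha @ V                      # (d,) aggregated message
```

Because the weights alpha depend on the current node state, the effective neighborhood is dynamic: different nodes (and different layers) can emphasize different neighbors instead of averaging them all equally.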
We introduce PyTorch Geometric, a library for deep learning on irregularly structured input data such as graphs, point clouds and manifolds, built upon PyTorch.
#2 best model for Graph Classification on REDDIT-B
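PyTorch Geometric stores a graph as a node feature matrix plus a (2, E) integer `edge_index` in COO format: row 0 holds source nodes, row 1 holds targets, one column per edge. A NumPy sketch of one message-passing step with mean aggregation over that layout (the function itself is an illustrative stand-in, not the library's API):

```python
import numpy as np

def mean_aggregate(x, edge_index):
    """One message-passing step: each node averages the features of
    its in-neighbors, given edges in (2, E) COO layout where
    edge_index[0] are sources and edge_index[1] are targets."""
    n = x.shape[0]
    src, dst = edge_index
    out = np.zeros_like(x, dtype=float)
    counts = np.zeros(n)
    np.add.at(out, dst, x[src])   # sum incoming source features per target
    np.add.at(counts, dst, 1.0)   # in-degree per target
    counts[counts == 0] = 1.0     # isolated nodes keep a zero message
    return out / counts[:, None]
```

The same edge list layout handles graphs, point-cloud neighborhoods, and mesh connectivity uniformly, which is what lets one library cover all three kinds of irregular input.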
High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection.
#3 best model for Semantic Segmentation on PASCAL Context
The proposed approach outperforms existing single-model networks on COCO object detection.
#3 best model for Semantic Segmentation on LIP val