Representation learning is concerned with training machine learning algorithms to learn useful representations, e.g. those that are interpretable, have latent features, or can be used for transfer learning.
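As a concrete illustration of the idea, the sketch below uses PCA, arguably the simplest linear representation learner: it finds latent directions of maximal variance and encodes each sample as coordinates along them. This is only a minimal, hedged example of learning latent features; the deep methods surveyed here learn nonlinear representations, and the function name `pca_encode` and the toy data are illustrative assumptions, not part of any paper below.

```python
import numpy as np

def pca_encode(X, n_components):
    """Return latent codes and the learned basis for data matrix X.

    A minimal linear representation learner: center the data, take the
    top singular directions, and encode each sample by its coordinates
    along them.
    """
    X_centered = X - X.mean(axis=0)
    # SVD gives the principal directions in the rows of Vt.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    basis = Vt[:n_components]      # latent feature directions
    codes = X_centered @ basis.T   # each row: a sample's latent features
    return codes, basis

# Toy data: 100 samples in 5-D that actually lie near a 2-D subspace.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.01 * rng.normal(size=(100, 5))

codes, basis = pca_encode(X, n_components=2)
# The 2-D codes are a compact representation of the 5-D observations.
```

The learned `codes` can then serve downstream tasks (e.g. transfer learning or clustering), which is the sense in which a representation is "useful" above.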
(Image credit: Visualizing and Understanding Convolutional Networks)
Similarity learning has attracted considerable attention from researchers in recent years, and many successful approaches have been proposed.
In this study, the Proximal Policy Optimization (PPO) algorithm is augmented with Generative Adversarial Networks (GANs) to increase sample efficiency by encouraging the network to learn efficient representations without depending on sparse and delayed rewards as supervision.
We demonstrate our method on the challenging task of learning representations for video face clustering.
We present a lightweight solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
In this paper, we propose FairNN, a neural network that performs joint feature representation and classification for fairness-aware learning.
To complicate matters further, supervised learning models may not generalize well on a novel dataset due to domain shift.
In this paper, motivated by the inherent connections between neural joint source-channel coding and discrete representation learning, we propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.
In this paper, we study a new representation-learning task, which we term disassembling object representations.