Representation Learning

3680 papers with code • 5 benchmarks • 9 datasets

Representation Learning is the process by which a machine learning algorithm extracts meaningful patterns from raw data and turns them into representations that are easier to understand and process. These representations can be designed for interpretability, can reveal hidden features, or can be reused for transfer learning. They are valuable across many fundamental machine learning tasks such as image classification and retrieval.

Deep neural networks can themselves be considered representation learning models: they encode the input and project it into a different subspace. The resulting representation is then typically passed to a simple predictor, for instance a linear classifier, to perform the downstream task.
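As a concrete illustration, the sketch below (assuming PyTorch; the encoder and data are random stand-ins, not any particular pretrained model) freezes an encoder and trains only a linear classifier, a so-called linear probe, on the representations it produces:

    # Minimal sketch: treat a frozen encoder as a representation learner and
    # train only a linear classifier on top of its features.
    # The encoder is a stand-in for a pretrained backbone; the data is random.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(            # stand-in for a pretrained backbone
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Linear(128, 64),
    )
    for p in encoder.parameters():      # freeze the representation
        p.requires_grad = False

    probe = nn.Linear(64, 10)           # linear classifier on the representation
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(32, 1, 28, 28)      # dummy batch standing in for real images
    y = torch.randint(0, 10, (32,))     # dummy labels

    with torch.no_grad():
        z = encoder(x)                  # extract the representation
    loss = loss_fn(probe(z), y)
    loss.backward()
    opt.step()

How well such a probe performs is a common, if rough, measure of the quality of the learned representation.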

Representation learning can be divided into:

  • Supervised representation learning: learning representations on task A using annotated data, then using them to solve task B
  • Unsupervised representation learning: learning representations on a task using unlabelled data. These representations are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks (see the masked-token sketch after this list).
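
For intuition, here is a rough sketch of a BERT-style masked-token objective: some input tokens are hidden and the model is trained to reconstruct them from context, so no human annotation is required. The model size, vocabulary, and data below are illustrative placeholders, not BERT's actual configuration:

    # Hedged sketch of a masked-token pretraining objective (PyTorch assumed).
    # All sizes and data are toy placeholders.
    import torch
    import torch.nn as nn

    vocab, d = 1000, 64
    embed = nn.Embedding(vocab, d)
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2
    )
    head = nn.Linear(d, vocab)

    tokens = torch.randint(0, vocab, (8, 16))   # dummy token ids
    mask = torch.rand(tokens.shape) < 0.15      # hide ~15% of positions
    inputs = tokens.masked_fill(mask, 0)        # 0 plays the role of [MASK]

    logits = head(encoder(embed(inputs)))
    loss = nn.functional.cross_entropy(
        logits[mask], tokens[mask]              # predict only the masked tokens
    )
    loss.backward()

The encoder's intermediate activations, not the token predictions themselves, are what get reused as representations for downstream tasks.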

More recently, self-supervised learning (SSL) has become one of the main drivers behind unsupervised representation learning in fields like computer vision and NLP.
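
A representative self-supervised recipe is contrastive learning. Below is a minimal sketch of a SimCLR-style InfoNCE loss (PyTorch assumed): embeddings of two augmented views of the same example are pulled together, while the other examples in the batch act as negatives. The random tensors stand in for encoder outputs on augmented images:

    # Rough sketch of a contrastive (InfoNCE) objective; not any specific
    # published implementation.
    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        """z1, z2: (batch, dim) embeddings of two views of the same batch."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature      # cosine similarities
        targets = torch.arange(z1.size(0))      # positives lie on the diagonal
        return F.cross_entropy(logits, targets)

    # Toy usage with random "views" standing in for augmented images.
    z1, z2 = torch.randn(16, 64), torch.randn(16, 64)
    loss = info_nce(z1, z2)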

(Image credit: Visualizing and Understanding Convolutional Networks)

Latest papers with no code

DACAD: Domain Adaptation Contrastive Learning for Anomaly Detection in Multivariate Time Series

no code yet • 17 Apr 2024

In this paper, we propose a novel Domain Adaptation Contrastive learning for Anomaly Detection in multivariate time series (DACAD) model to address this issue by combining UDA and contrastive representation learning.

Leveraging Fine-Grained Information and Noise Decoupling for Remote Sensing Change Detection

no code yet • 17 Apr 2024

Next, a shape-aware and a brightness-aware module are designed to improve the capacity for representation learning.

CORE: Data Augmentation for Link Prediction via Information Bottleneck

no code yet • 17 Apr 2024

Link prediction (LP) is a fundamental task in graph representation learning, with numerous applications in diverse domains.

A Novel ICD Coding Framework Based on Associated and Hierarchical Code Description Distillation

no code yet • 17 Apr 2024

To address these problems, we propose a novel framework based on associated and hierarchical code description distillation (AHDD) for better code representation learning and avoidance of improper code assignment. We utilize the code description and the hierarchical structure inherent to the ICD codes.

DRepMRec: A Dual Representation Learning Framework for Multimodal Recommendation

no code yet • 17 Apr 2024

After obtaining separate behavior and modal representations, we design a Behavior-Modal Alignment Module (BMA) to align and fuse the dual representations to solve the misalignment problem.

AGHINT: Attribute-Guided Representation Learning on Heterogeneous Information Networks with Transformer

no code yet • 16 Apr 2024

Recently, heterogeneous graph neural networks (HGNNs) have achieved impressive success in representation learning by capturing long-range dependencies and heterogeneity at the node level.

Tripod: Three Complementary Inductive Biases for Disentangled Representation Learning

no code yet • 16 Apr 2024

Inductive biases are crucial in disentangled representation learning for narrowing down an underspecified solution set.

HiGraphDTI: Hierarchical Graph Representation Learning for Drug-Target Interaction Prediction

no code yet • 16 Apr 2024

Specifically, HiGraphDTI learns hierarchical drug representations from triple-level molecular graphs to thoroughly exploit chemical information embedded in atoms, motifs, and molecules.

Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning

no code yet • 16 Apr 2024

Our methodology streamlines pre-trained multimodal large models using only their output features and original image-level information, requiring minimal computational resources.

Utility-Fairness Trade-Offs and How to Find Them

no code yet • 15 Apr 2024

and 2) How can we numerically quantify these trade-offs from data for a desired prediction task and demographic attribute of interest?