Representation Learning

3672 papers with code • 5 benchmarks • 9 datasets

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode the input into intermediate representations, projecting it into a different subspace. These representations are then typically passed to a simple predictor, for instance a linear classifier, to solve the task at hand.
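A minimal sketch of this "frozen encoder + linear classifier" recipe, with a hypothetical random projection plus nonlinearity standing in for a pretrained deep network (in practice the encoder weights would be learned):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Project raw inputs into a representation subspace (frozen encoder)."""
    return np.tanh(x @ w)  # nonlinearity mimics a network layer

# Toy 2-class data: two Gaussian blobs in 20-D raw space.
n, d, k = 200, 20, 8
x0 = rng.normal(-1.0, 1.0, size=(n, d))
x1 = rng.normal(+1.0, 1.0, size=(n, d))
x = np.vstack([x0, x1])
y = np.array([0] * n + [1] * n)

w = rng.normal(size=(d, k)) / np.sqrt(d)  # frozen encoder weights
z = encode(x, w)                          # representations

# Linear probe: least-squares classifier fit on top of the frozen features.
z1 = np.hstack([z, np.ones((z.shape[0], 1))])  # add a bias column
coef, *_ = np.linalg.lstsq(z1, 2.0 * y - 1.0, rcond=None)
pred = (z1 @ coef > 0).astype(int)
acc = (pred == y).mean()
print(f"linear-probe accuracy: {acc:.2f}")
```

Only the linear probe sees the labels; the encoder stays fixed, which is exactly how learned representations are commonly evaluated.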

Representation learning can be divided into:

  • Supervised representation learning: learning representations on task A using annotated data, which are then reused to solve task B
  • Unsupervised representation learning: learning representations from unlabeled data. These representations are then used for downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.
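The unsupervised path can be sketched at toy scale: learn a representation without labels (PCA here stands in for unsupervised pretraining), then solve a downstream task with only a handful of annotations. All names and the synthetic data below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled data: 2 latent clusters embedded in 50-D with noise.
n, d = 300, 50
centers = rng.normal(size=(2, d)) * 3.0
labels = rng.integers(0, 2, size=n)
x = centers[labels] + rng.normal(size=(n, d))

# "Pretraining": fit PCA on the data without ever touching `labels`.
mu = x.mean(axis=0)
_, _, vt = np.linalg.svd(x - mu, full_matrices=False)
encode = lambda a: (a - mu) @ vt[:2].T   # 2-D learned representation

# Downstream task: nearest-centroid classifier fit on only 10 labeled points.
z = encode(x)
idx0 = np.where(labels == 0)[0][:5]      # tiny annotated subset
idx1 = np.where(labels == 1)[0][:5]
cents = np.array([z[idx0].mean(axis=0), z[idx1].mean(axis=0)])
d2 = ((z[:, None, :] - cents[None, :, :]) ** 2).sum(-1)
pred = np.argmin(d2, axis=1)
acc = (pred == labels).mean()
print(f"downstream accuracy with 10 labels: {acc:.2f}")
```

Because the representation already separates the clusters, a handful of labels suffices downstream, which is the core promise of unsupervised pretraining.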

More recently, self-supervised learning (SSL) has become one of the main drivers behind unsupervised representation learning in fields such as computer vision and NLP.
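A common SSL objective is contrastive: two augmented "views" of the same sample should embed close together while other samples are pushed apart. The sketch below computes an InfoNCE loss in isolation; the additive-noise "augmentation" and the raw embeddings are illustrative stand-ins for real augmentations and encoder outputs:

```python
import numpy as np

rng = np.random.default_rng(2)

def info_nce(za, zb, tau=0.5):
    """InfoNCE loss: za[i] should match zb[i] against all other rows of zb."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau                   # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))        # positives sit on the diagonal

x = rng.normal(size=(16, 32))                  # a batch of raw samples
view_a = x + 0.05 * rng.normal(size=x.shape)   # light "augmentation"
view_b = x + 0.05 * rng.normal(size=x.shape)

aligned = info_nce(view_a, view_b)             # correctly matched views
shuffled = info_nce(view_a, view_b[::-1])      # deliberately mismatched pairs
print(f"aligned loss {aligned:.3f}, shuffled loss {shuffled:.3f}")
```

The loss is small when positive pairs agree and large when they are mismatched, which is the signal an SSL encoder is trained to minimize.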

( Image credit: Visualizing and Understanding Convolutional Networks )


Universal representations for financial transactional data: embracing local, global, and external contexts

romanenkova95/transactions_gen_models 2 Apr 2024

Effective processing of financial transactions is essential for banking data analysis.

ContrastCAD: Contrastive Learning-based Representation Learning for Computer-Aided Design Models

cm8908/contrastcad 2 Apr 2024

However, learning CAD models is still a challenge, because they can be represented as complex shapes with long construction sequences.

HypeBoy: Generative Self-Supervised Representation Learning on Hypergraphs

kswoo97/hypeboy 31 Mar 2024

Based on the generative SSL task, we propose a hypergraph SSL method, HypeBoy.

Addressing Loss of Plasticity and Catastrophic Forgetting in Continual Learning

mohmdelsayed/upgd 31 Mar 2024

Deep representation learning methods struggle with continual learning, suffering from both catastrophic forgetting of useful units and loss of plasticity, often due to rigid and unuseful units.

GeoAuxNet: Towards Universal 3D Representation Learning for Multi-sensor Point Clouds

zhangshengjun2019/geoauxnet 28 Mar 2024

In this paper, we propose geometry-to-voxel auxiliary learning to enable voxel representations to access point-level geometric information, which supports better generalisation of the voxel-based backbone with additional interpretations of multi-sensor point clouds.

MPXGAT: An Attention based Deep Learning Model for Multiplex Graphs Embedding

marcob46/mpxgat 28 Mar 2024

Graph representation learning has rapidly emerged as a pivotal field of study.

Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models

lavi-lab/visual-table 27 Mar 2024

When visual tables serve as standalone visual representations, our model can closely match or even beat the SOTA MLLMs that are built on CLIP visual embeddings.

Neural Clustering based Visual Representation Learning

guikunchen/fec 26 Mar 2024

In this work, we propose feature extraction with clustering (FEC), a conceptually elegant yet surprisingly ad-hoc interpretable neural clustering framework, which views feature extraction as a process of selecting representatives from data and thus automatically captures the underlying data distribution.

Grad-CAMO: Learning Interpretable Single-Cell Morphological Profiles from 3D Cell Painting Images

eigenvivek/grad-camo 26 Mar 2024

Despite their black-box nature, deep learning models are extensively used in image-based drug discovery to extract feature vectors from single cells in microscopy images.

HILL: Hierarchy-aware Information Lossless Contrastive Learning for Hierarchical Text Classification

rooooyy/hill 26 Mar 2024

Existing self-supervised methods in natural language processing (NLP), especially hierarchical text classification (HTC), mainly focus on self-supervised contrastive learning, relying heavily on human-designed augmentation rules to generate contrastive samples, which can potentially corrupt or distort the original information.
