Self-Supervised Learning

1688 papers with code • 10 benchmarks • 41 datasets

Self-Supervised Learning was proposed as a way to exploit unlabeled data, building on the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time, so the motivation of Self-Supervised Learning is to make use of this large amount of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to its structure or characteristics, and then to train on this task in a supervised manner. Self-Supervised Learning is widely used in representation learning to make a model learn the latent features of the data, and it is often employed in computer vision, video processing, and robot control. A minimal sketch of such a pretext task is given below.

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

Image source: LeCun
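
To make the pretext-task idea concrete, the sketch below generates labels from unlabeled images by applying a random rotation and training an ordinary classifier to predict it (a RotNet-style pretext task). It is an illustrative example only; the encoder, data, and hyperparameters are placeholders, not taken from any of the papers listed here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision

    # Pretext task: predict which of four rotations (0/90/180/270 degrees) was applied.
    # The label is generated from the unlabeled image itself (assumes square images so
    # every rotation keeps the same tensor shape).
    def rotate_batch(images):
        labels = torch.randint(0, 4, (images.size(0),))
        rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                               for img, k in zip(images, labels)])
        return rotated, labels

    encoder = torchvision.models.resnet18(weights=None)
    encoder.fc = nn.Linear(encoder.fc.in_features, 4)    # 4-way rotation head
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    def pretrain_step(images):                           # images: (B, 3, H, W), no labels needed
        rotated, labels = rotate_batch(images)
        logits = encoder(rotated)
        loss = F.cross_entropy(logits, labels)           # supervised loss on generated labels
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

After pretraining, the rotation head is discarded and the encoder is reused (frozen or fine-tuned) for the downstream task, typically with a much smaller labeled set.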

Libraries

Use these libraries to find Self-Supervised Learning models and implementations
9 libraries are indexed for this task.

TFPred: Learning Discriminative Representations from Unlabeled Data for Few-Label Rotating Machinery Fault Diagnosis

Xiaohan-Chen/TFPred Control Engineering Practice 01 May 2024

Recent advances in intelligent rotating machinery fault diagnosis have been enabled by the availability of massive labeled training data.

Efficient Image Pre-Training with Siamese Cropped Masked Autoencoders

alexandre-eymael/cropmae 26 Mar 2024

In particular, SiamMAE recently introduced a Siamese network, training a shared-weight encoder from two frames of a video with a high asymmetric masking ratio (95%).

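As a rough illustration of the asymmetric masking described above (and not the CropMAE or SiamMAE implementation), the sketch below keeps only about 5% of the patch tokens of one frame while leaving the other frame fully visible; both would then be fed to a shared-weight encoder.

    import torch

    def random_patch_mask(tokens, mask_ratio=0.95):
        """Keep a random (1 - mask_ratio) subset of patch tokens.

        tokens: (batch, num_patches, dim) patch embeddings of one frame.
        """
        b, n, d = tokens.shape
        n_keep = max(1, int(n * (1.0 - mask_ratio)))
        keep = torch.rand(b, n).argsort(dim=1)[:, :n_keep]   # random patch indices to keep
        visible = torch.gather(tokens, 1, keep.unsqueeze(-1).expand(-1, -1, d))
        return visible, keep

    # Asymmetric setup: the second frame is masked at 95%, the first is left intact.
    frame1 = torch.randn(8, 196, 768)        # e.g. 14x14 patches, ViT-Base embedding dim
    frame2 = torch.randn(8, 196, 768)
    visible2, kept = random_patch_mask(frame2, mask_ratio=0.95)
    # frame1 (all 196 patches) and visible2 (~10 patches) would go through a shared-weight
    # encoder; a decoder then reconstructs the masked patches of frame2.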

A Survey on Self-Supervised Pre-Training of Graph Foundation Models: A Knowledge-Based Perspective

newiz430/pretext 24 Mar 2024

Graph self-supervised learning is now a go-to method for pre-training graph foundation models, including graph neural networks, graph transformers, and more recent large language model (LLM)-based graph models.

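As one concrete example of the pretext tasks such a survey covers, masked node-feature reconstruction is a common graph SSL objective: hide the features of some nodes, encode the graph, and reconstruct the hidden features. The sketch below uses random data and a single hand-rolled GCN-style propagation; it is a generic illustration, not a method from the survey.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    num_nodes, feat_dim, hidden = 100, 32, 64
    x = torch.randn(num_nodes, feat_dim)                     # unlabeled node features
    adj = (torch.rand(num_nodes, num_nodes) < 0.05).float()  # random graph for illustration
    adj = torch.clamp(adj + adj.t() + torch.eye(num_nodes), max=1.0)  # symmetrize + self-loops

    deg = adj.sum(dim=1)
    norm_adj = adj / torch.sqrt(deg.unsqueeze(0) * deg.unsqueeze(1))  # D^-1/2 A D^-1/2

    encoder = nn.Linear(feat_dim, hidden)
    decoder = nn.Linear(hidden, feat_dim)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    def masked_feature_step(mask_ratio=0.3):
        mask = torch.rand(num_nodes) < mask_ratio            # nodes whose features get hidden
        mask[0] = True                                       # ensure at least one node is masked
        x_in = x.clone()
        x_in[mask] = 0.0
        h = F.relu(norm_adj @ encoder(x_in))                 # GCN-style propagation + encoding
        x_rec = decoder(norm_adj @ h)
        loss = F.mse_loss(x_rec[mask], x[mask])              # reconstruct only the masked nodes
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()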

Pose-Guided Self-Training with Two-Stage Clustering for Unsupervised Landmark Discovery

skt9/pose-proxy-uld 24 Mar 2024

Motivated by the zero-shot performance, we develop an unsupervised landmark discovery (ULD) algorithm based on diffusion features using self-training and clustering, which also outperforms prior methods by notable margins.

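Self-training on top of clustered features, as mentioned above, can be illustrated generically: cluster frozen features, treat the cluster assignments as pseudo-labels, fit a classifier on them, and keep only its confident predictions for the next round. The snippet below uses scikit-learn and random stand-in features; it is not the paper's two-stage pipeline.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 128))      # stand-in for frozen (e.g. diffusion) features

    # Stage 1: clustering turns unlabeled features into pseudo-labels.
    pseudo_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features)

    # Stage 2: self-training -- fit a classifier on the pseudo-labels, then retrain it on
    # only the samples it is confident about.
    clf = LogisticRegression(max_iter=1000).fit(features, pseudo_labels)
    probs = clf.predict_proba(features)
    confident = probs.max(axis=1) > 0.8
    if np.unique(pseudo_labels[confident]).size > 1:         # guard against degenerate splits
        clf = LogisticRegression(max_iter=1000).fit(features[confident],
                                                    pseudo_labels[confident])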

An Embarrassingly Simple Defense Against Backdoor Attacks On SSL

aryan-satpathy/backdoor 23 Mar 2024

Using object classification as the downstream task for SSL, we demonstrate successful defense strategies that do not require re-training of the model.

Hierarchical Text-to-Vision Self Supervised Alignment for Improved Histopathology Representation Learning

hasindri/hlss 21 Mar 2024

Self-supervised representation learning has been highly promising for histopathology image analysis with numerous approaches leveraging their patient-slide-patch hierarchy to learn better representations.

MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining

vitae-transformer/mtp 20 Mar 2024

However, transferring the pretrained models to downstream tasks may encounter task discrepancy due to their formulation of pretraining as image classification or object discrimination tasks.

On Pretraining Data Diversity for Self-Supervised Learning

hammoudhasan/diversityssl 20 Mar 2024

We explore the impact of training with more diverse datasets, characterized by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget.

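The fixed-budget setup described above can be sketched as follows: the number of SSL update steps stays constant while the number of unique pretraining samples varies, so low-diversity runs simply revisit the same samples more often. The dataset, model, and ssl_step objective below are placeholders, not the authors' training code.

    import torch
    from torch.utils.data import DataLoader, Subset

    def make_loader(dataset, num_unique, batch_size=256, seed=0):
        """Restrict pretraining to `num_unique` samples -- the diversity knob."""
        g = torch.Generator().manual_seed(seed)
        idx = torch.randperm(len(dataset), generator=g)[:num_unique]
        return DataLoader(Subset(dataset, idx.tolist()), batch_size=batch_size, shuffle=True)

    def pretrain_fixed_budget(model, ssl_step, loader, total_steps=100_000):
        """Run the same number of SSL updates regardless of how many unique samples exist."""
        step, it = 0, iter(loader)
        while step < total_steps:
            try:
                batch = next(it)
            except StopIteration:        # small, low-diversity sets are simply re-iterated
                it = iter(loader)
                batch = next(it)
            ssl_step(model, batch)       # any SSL objective: contrastive, masked modeling, ...
            step += 1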

Diffusion-Driven Self-Supervised Learning for Shape Reconstruction and Pose Estimation

s-jingtao/self-srpe 19 Mar 2024

We introduce a pretrain-to-refine self-supervised training paradigm to train our network.

Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs

ashiq24/coda-no 19 Mar 2024

On complex downstream tasks with limited data, such as fluid flow simulations and fluid-structure interactions, we found CoDA-NO to outperform existing methods on the few-shot learning task by over 36%.
