Self-Supervised Learning

1719 papers with code • 10 benchmarks • 41 datasets

Self-Supervised Learning was proposed to exploit unlabeled data following the success of supervised learning. Producing a well-labeled dataset is expensive, while unlabeled data is generated all the time, so the motivation of Self-Supervised Learning is to make use of this large pool of unlabeled data. The main idea is to generate labels from the unlabeled data itself, based on the structure or characteristics of the data, and then train on this data in a supervised manner. Self-Supervised Learning is widely used in representation learning to let a model learn the latent features of the data, and is often employed in computer vision, video processing, and robot control.

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

Image source: LeCun
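
The core idea above, generating labels from the data itself, can be sketched with a classic pretext task: rotation prediction. This is a minimal illustration (assuming NumPy and a hypothetical `make_rotation_task` helper, not any specific library's API): each unlabeled image is rotated by a random multiple of 90°, and the rotation index becomes the "free" label a model would be trained to predict.

```python
import numpy as np

def make_rotation_task(images, rng=None):
    """Build a self-supervised pretext task: predict each image's rotation.

    `images` has shape (N, H, W). Labels (0, 1, 2, 3 = 0°, 90°, 180°, 270°)
    are derived from the data itself, so no human annotation is needed.
    """
    rng = np.random.default_rng(rng)
    labels = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, labels)])
    return rotated, labels

# Usage: 8 fake "unlabeled" 32x32 images become a labeled 4-way task.
unlabeled = np.random.rand(8, 32, 32)
x, y = make_rotation_task(unlabeled, rng=0)
```

A classifier trained on `(x, y)` in the usual supervised way learns representations from unlabeled data; the backbone can then be reused or fine-tuned on a downstream task.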


Latest papers with no code

Integration of Self-Supervised BYOL in Semi-Supervised Medical Image Recognition

no code yet • 16 Apr 2024

Image recognition techniques heavily rely on abundant labeled data, particularly in medical contexts.

Can We Break Free from Strong Data Augmentations in Self-Supervised Learning?

no code yet • 15 Apr 2024

Self-supervised learning (SSL) has emerged as a promising solution for addressing the challenge of limited labeled data in deep neural networks (DNNs), offering scalability potential.

How to build the best medical image segmentation algorithm using foundation models: a comprehensive empirical study with Segment Anything Model

no code yet • 15 Apr 2024

Automated segmentation is a fundamental medical image analysis task, which enjoys significant advances due to the advent of deep learning.

Self-Supervised Learning Featuring Small-Scale Image Dataset for Treatable Retinal Diseases Classification

no code yet • 15 Apr 2024

The proposed SSL model achieves state-of-the-art accuracy of 98.84% using only 4,000 training images.

An Experimental Comparison Of Multi-view Self-supervised Methods For Music Tagging

no code yet • 14 Apr 2024

In this study, we expand the scope of pretext tasks applied to music by investigating and comparing the performance of new self-supervised methods for music tagging.

Label-free Anomaly Detection in Aerial Agricultural Images with Masked Image Modeling

no code yet • 13 Apr 2024

Hence, this is posed as an anomaly detection task in agricultural images.
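
Masked image modeling, named in the title above, is itself a self-supervised pretext task: a large fraction of image patches is hidden and the model learns to reconstruct them from the visible ones. A minimal sketch of the masking step (assuming NumPy; `random_patch_mask` is a hypothetical helper, not the paper's method):

```python
import numpy as np

def random_patch_mask(image, patch=8, mask_ratio=0.75, rng=None):
    """Mask a fraction of non-overlapping patches of a 2-D image.

    Returns the masked image and a pixel-level boolean mask; a model would be
    trained to reconstruct the hidden patches from the visible ones, so the
    reconstruction targets come from the image itself (no annotation needed).
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape
    gh, gw = h // patch, w // patch
    n_mask = int(gh * gw * mask_ratio)
    idx = rng.choice(gh * gw, size=n_mask, replace=False)
    grid = np.zeros(gh * gw, dtype=bool)
    grid[idx] = True
    # Upsample the patch-level mask to pixel resolution and zero masked pixels.
    pixel_mask = np.kron(grid.reshape(gh, gw), np.ones((patch, patch), dtype=bool))
    return np.where(pixel_mask, 0.0, image), pixel_mask

img = np.random.rand(32, 32)
masked, pm = random_patch_mask(img, patch=8, mask_ratio=0.75, rng=0)
```

For anomaly detection, the reconstruction error on held-out regions can serve as an anomaly score: regions the model fails to reconstruct well are unusual relative to the training distribution.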

MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild

no code yet • 13 Apr 2024

Within the field of multimodal DFER, recent methods have focused on exploiting advances of self-supervised learning (SSL) for pre-training of strong multimodal encoders.

Emerging Property of Masked Token for Effective Pre-training

no code yet • 12 Apr 2024

Initially, we delve into an exploration of the inherent properties that a masked token ought to possess.

An Effective Automated Speaking Assessment Approach to Mitigating Data Scarcity and Imbalanced Distribution

no code yet • 11 Apr 2024

Automated speaking assessment (ASA) typically involves automatic speech recognition (ASR) and hand-crafted feature extraction from the ASR transcript of a learner's speech.

Mitigating Object Dependencies: Improving Point Cloud Self-Supervised Learning through Object Exchange

no code yet • 11 Apr 2024

Subsequently, we introduce a context-aware feature learning strategy, which encodes object patterns without relying on their specific context by aggregating object features across various scenes.