Self-Supervised Learning
1719 papers with code • 10 benchmarks • 41 datasets
Self-Supervised Learning was proposed to exploit unlabeled data, building on the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time. The motivation of Self-Supervised Learning is to make use of this large amount of unlabeled data. Its main idea is to generate labels from the unlabeled data itself, according to the structure or characteristics of the data, and then train on the resulting labeled pairs in a supervised manner. Self-Supervised Learning is widely used in representation learning, where a model learns the latent features of the data. The technique is often employed in computer vision, video processing, and robot control.
Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration
Image source: LeCun
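The core idea above, deriving labels from the data itself, can be illustrated with a rotation-prediction pretext task, one common self-supervised setup: each unlabeled image is rotated by a random multiple of 90°, and the rotation index serves as the pseudo-label a model would then be trained to predict. The snippet below is a minimal NumPy sketch of the pseudo-label generation step only (the function name and random "images" are illustrative, not from any specific paper on this page):

```python
import numpy as np

def make_rotation_dataset(images, rng):
    """Self-supervised pseudo-labeling: rotate each unlabeled image by
    0, 90, 180, or 270 degrees; the rotation index becomes its label."""
    xs, ys = [], []
    for img in images:
        k = rng.integers(0, 4)       # pseudo-label: number of 90-degree turns
        xs.append(np.rot90(img, k))  # transformed input
        ys.append(k)                 # label derived from the data itself
    return np.stack(xs), np.array(ys)

rng = np.random.default_rng(0)
unlabeled = rng.random((8, 32, 32))  # 8 unlabeled "images", no human labels
x, y = make_rotation_dataset(unlabeled, rng)
print(x.shape, y.shape)  # (8, 32, 32) (8,)
```

A classifier trained to predict `y` from `x` must learn features that capture image orientation, which is the sense in which "unsupervised data is trained in a supervised manner."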
Libraries
Use these libraries to find Self-Supervised Learning models and implementations.
Latest papers with no code
Integration of Self-Supervised BYOL in Semi-Supervised Medical Image Recognition
Image recognition techniques heavily rely on abundant labeled data, particularly in medical contexts.
Can We Break Free from Strong Data Augmentations in Self-Supervised Learning?
Self-supervised learning (SSL) has emerged as a promising solution for addressing the challenge of limited labeled data in deep neural networks (DNNs), offering scalability potential.
How to build the best medical image segmentation algorithm using foundation models: a comprehensive empirical study with Segment Anything Model
Automated segmentation is a fundamental medical image analysis task, which enjoys significant advances due to the advent of deep learning.
Self-Supervised Learning Featuring Small-Scale Image Dataset for Treatable Retinal Diseases Classification
The proposed SSL model achieves state-of-the-art accuracy of 98.84% using only 4,000 training images.
An Experimental Comparison Of Multi-view Self-supervised Methods For Music Tagging
In this study, we expand the scope of pretext tasks applied to music by investigating and comparing the performance of new self-supervised methods for music tagging.
Label-free Anomaly Detection in Aerial Agricultural Images with Masked Image Modeling
Hence, this is posed as an anomaly detection task in agricultural images.
MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild
Within the field of multimodal DFER, recent methods have focused on exploiting advances of self-supervised learning (SSL) for pre-training of strong multimodal encoders.
Emerging Property of Masked Token for Effective Pre-training
Initially, we delve into an exploration of the inherent properties that a masked token ought to possess.
An Effective Automated Speaking Assessment Approach to Mitigating Data Scarcity and Imbalanced Distribution
Automated speaking assessment (ASA) typically involves automatic speech recognition (ASR) and hand-crafted feature extraction from the ASR transcript of a learner's speech.
Mitigating Object Dependencies: Improving Point Cloud Self-Supervised Learning through Object Exchange
Subsequently, we introduce a context-aware feature learning strategy, which encodes object patterns without relying on their specific context by aggregating object features across various scenes.