Unsupervised Pre-training

104 papers with code • 2 benchmarks • 7 datasets

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
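The pre-train-then-fine-tune recipe can be sketched in a few lines. Below is a minimal NumPy illustration, not any specific paper's method: a linear autoencoder is "pre-trained" on unlabeled data by minimizing reconstruction error, and its encoder weights are then available to initialize a downstream model. All sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # unlabeled data: 200 samples, 8 features

# Pre-training task: linear autoencoder with encoder W (8 -> 3)
# and decoder V (3 -> 8), trained by plain gradient descent on
# the reconstruction loss ||X - X W V||^2.
W = rng.normal(scale=0.1, size=(8, 3))
V = rng.normal(scale=0.1, size=(3, 8))
lr = 0.01
for _ in range(500):
    Z = X @ W                          # encode
    R = X - Z @ V                      # reconstruction residual
    grad_V = -2 * Z.T @ R / len(X)     # dL/dV
    grad_W = -2 * X.T @ R @ V.T / len(X)  # dL/dW
    V -= lr * grad_V
    W -= lr * grad_W

recon_error = np.mean((X - (X @ W) @ V) ** 2)
# After pre-training, W would initialize the encoder of a downstream
# model, which is then fine-tuned on a (typically small) labeled set.
```

The same two-phase structure holds for the methods listed below; only the auxiliary task (masking, contrastive pairs, multi-modal alignment) and the architecture change.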


Latest papers with no code

Multi-Stage Multi-Modal Pre-Training for Automatic Speech Recognition

no code yet • 28 Mar 2024

Recent advances in machine learning have demonstrated that multi-modal pre-training can improve automatic speech recognition (ASR) performance compared to randomly initialized models, even when models are fine-tuned on uni-modal tasks.

BID: Boundary-Interior Decoding for Unsupervised Temporal Action Localization Pre-Training

no code yet • 12 Mar 2024

Skeleton-based motion representations are robust for action localization and understanding for their invariance to perspective, lighting, and occlusion, compared with images.

On the Generalization Ability of Unsupervised Pretraining

no code yet • 11 Mar 2024

Recent advances in unsupervised learning have shown that unsupervised pre-training, followed by fine-tuning, can improve model generalization.

Attention-Guided Masked Autoencoders For Learning Image Representations

no code yet • 23 Feb 2024

Masked autoencoders (MAEs) have established themselves as a powerful method for unsupervised pre-training for computer vision tasks.
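The core MAE idea is simple to state: hide most patches of an image, encode only the visible ones, and train a decoder to reconstruct the hidden ones, with the loss computed on masked patches only. The toy sketch below shows just that loss structure; the patch sizes, the 75% mask ratio, and the mean-of-visible "predictor" (standing in for a real encoder-decoder) are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "image": 16 patches, each flattened to a 4-dim vector.
patches = rng.normal(size=(16, 4))

# Randomly mask 75% of patches, a ratio commonly used in MAE setups.
n_masked = 12
masked_idx = rng.choice(16, size=n_masked, replace=False)
visible_idx = np.setdiff1d(np.arange(16), masked_idx)

# A real MAE encodes only the visible patches and a light decoder
# predicts the masked ones; here a stand-in predictor (mean of the
# visible patches) marks where that prediction would go.
prediction = np.tile(patches[visible_idx].mean(axis=0), (n_masked, 1))

# The reconstruction loss is computed on the masked patches only.
loss = np.mean((prediction - patches[masked_idx]) ** 2)
```

Because the encoder never sees the masked patches, the network is forced to learn representations that capture enough context to fill them in.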

CLCE: An Approach to Refining Cross-Entropy and Contrastive Learning for Optimized Learning Fusion

no code yet • 22 Feb 2024

State-of-the-art pre-trained image models predominantly adopt a two-stage approach: initial unsupervised pre-training on large-scale datasets followed by task-specific fine-tuning using cross-entropy (CE) loss.
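The fine-tuning stage of that pipeline reduces to minimizing cross-entropy between the model's softmax outputs and the labels. A minimal NumPy sketch of the CE computation itself (the logits and labels here are made-up toy values):

```python
import numpy as np

# Toy logits from a fine-tuned head: batch of 3 samples, 4 classes.
logits = np.array([[ 2.0, 0.5, 0.1, -1.0],
                   [ 0.2, 1.5, 0.3,  0.0],
                   [-0.5, 0.0, 2.2,  0.4]])
labels = np.array([0, 1, 2])

# Softmax (shifted by the row max for numerical stability),
# then the mean negative log-likelihood of the true classes.
shifted = logits - logits.max(axis=1, keepdims=True)
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
ce_loss = -np.log(probs[np.arange(len(labels)), labels]).mean()
```

Approaches like CLCE combine this objective with a contrastive term rather than using CE alone.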

CochCeps-Augment: A Novel Self-Supervised Contrastive Learning Using Cochlear Cepstrum-based Masking for Speech Emotion Recognition

no code yet • 10 Feb 2024

Self-supervised learning (SSL) for recognizing the emotional content of speech can be heavily degraded by the presence of noise, which hampers the modeling of the intricate temporal and spectral informative structures of speech.

MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning

no code yet • 3 Feb 2024

The scarcity of annotated data has sparked significant interest in unsupervised pre-training methods that leverage medical reports as auxiliary signals for medical visual representation learning.

Unsupervised Pre-Training for 3D Leaf Instance Segmentation

no code yet • 16 Jan 2024

Monitoring plants and measuring their traits is an important task in agriculture often referred to as plant phenotyping.

FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs

no code yet • 12 Dec 2023

We evaluate the performance-fairness trade-off for SISA, and empirically demonstrate that SISA can indeed reduce fairness in LLMs.

Unsupervised Pre-Training Using Masked Autoencoders for ECG Analysis

no code yet • 17 Oct 2023

Unsupervised learning methods have become increasingly important in deep learning due to their demonstrated ability to leverage large datasets and achieve higher accuracy in computer vision and natural language processing tasks.