1119 papers with code • 1 benchmark • 9 datasets
This paper presents SimCSE, a simple contrastive learning framework that greatly advances the state of the art in sentence embeddings.
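At its core, a contrastive framework like this pulls embeddings of two views of the same input together while pushing apart embeddings of other inputs in the batch, typically via an in-batch InfoNCE objective. A minimal numpy sketch of that loss (the temperature value and the pairing of views are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.05):
    """In-batch InfoNCE: z1[i] should match z2[i] against every other z2[j].

    z1, z2: (batch, dim) arrays of paired embeddings, e.g. two differently
    augmented (or differently dropout-noised) encodings of the same inputs.
    `temperature` is an illustrative choice, not a prescribed value.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature                    # (batch, batch)
    # Cross-entropy with the diagonal (the true pair) as the positive class
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))
```

The loss is near zero when each embedding's closest neighbor in the other view is its own positive pair, and grows when positives are mismatched.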
In addition, we propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without significantly increasing the memory or compute requirements.
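The idea is to sample a small number of standard-resolution global crops plus several additional low-resolution local crops of the same image, so extra views come almost for free. A minimal nearest-neighbor sketch (crop sizes, output resolutions, and view counts are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def random_crop(img, size, rng):
    """Take a random square crop of side `size` from an (H, W, C) array."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def resize_nn(img, out):
    """Crude nearest-neighbor resize to (out, out) — stand-in for a real resizer."""
    h, w = img.shape[:2]
    rows = np.arange(out) * h // out
    cols = np.arange(out) * w // out
    return img[rows][:, cols]

def multi_crop(img, rng, n_global=2, n_local=4):
    """Return a mix of high- and low-resolution views of one image.

    Two global views (resized to 224) plus several small local views
    (resized to 96); all sizes here are illustrative.
    """
    views = [resize_nn(random_crop(img, 160, rng), 224) for _ in range(n_global)]
    views += [resize_nn(random_crop(img, 64, rng), 96) for _ in range(n_local)]
    return views
```

Because the local views are low-resolution, encoding them adds far less compute than encoding extra full-resolution views would.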
Neural net classifiers trained on data with annotated class labels can also capture apparent visual similarity among categories without being directed to do so.
The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images.
Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning
We argue that the power of contrastive learning has yet to be fully unleashed, as current methods are trained only on instance-level pretext tasks, leading to representations that may be sub-optimal for downstream tasks requiring dense pixel predictions.
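A pixel-level pretext task instead treats individual feature-map locations as instances: features of the same (or spatially close) original-image pixel, seen through two different augmented views, should agree. A simplified numpy sketch in that spirit — the positive-matching rule by coordinate distance and the cosine-style loss are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def pixel_consistency_loss(f1, f2, coords1, coords2, thresh=1.0):
    """Simplified pixel-level consistency loss.

    f1, f2: (N, D) per-pixel feature vectors from two augmented views.
    coords1, coords2: (N, 2) coordinates of those pixels in the original image.
    Pixel pairs closer than `thresh` in the original image count as positives;
    the loss encourages their features to have cosine similarity 1.
    """
    f1 = f1 / np.linalg.norm(f1, axis=1, keepdims=True)
    f2 = f2 / np.linalg.norm(f2, axis=1, keepdims=True)
    sim = f1 @ f2.T                                          # (N, N) cosine sims
    dist = np.linalg.norm(coords1[:, None] - coords2[None], axis=-1)
    positives = dist < thresh
    # 1 - cosine similarity, averaged over corresponding pixel pairs
    return float(np.mean(1.0 - sim[positives]))
```

The loss is zero when corresponding pixels in the two views have identical (direction-wise) features, which is exactly the dense consistency that instance-level objectives do not enforce.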