Divide and Contrast: Self-supervised Learning from Uncurated Data
Self-supervised learning holds promise in leveraging large amounts of unlabeled data; however, much of its progress has thus far been limited to highly curated pre-training data such as ImageNet. We explore the effects of contrastive learning from larger, less-curated image datasets such as YFCC, and find there is indeed a large difference in the resulting representation quality. We hypothesize that this curation gap is due to a shift in the distribution of image classes, which is more diverse and heavy-tailed, resulting in less relevant negative samples to learn from. We test this hypothesis with a new approach, Divide and Contrast (DnC), which alternates between contrastive learning and clustering-based hard negative mining. When pretrained on less-curated datasets, DnC greatly improves the performance of self-supervised learning on downstream tasks, while remaining competitive with the current state-of-the-art on curated datasets.
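The abstract describes DnC as alternating between standard contrastive learning and clustering-based hard negative mining. The sketch below illustrates that loop under stated assumptions: a toy encoder and random features stand in for a ResNet and real images, and the `Encoder`, `info_nce`, and `augment` helpers, cluster count, and hyperparameters are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of the Divide and Contrast idea from the abstract:
# (1) contrastive pretraining on the full dataset,
# (2) k-means clustering of the learned embeddings ("divide"),
# (3) further contrastive training within each cluster, so negatives
#     come from similar images (clustering-based hard negative mining).

import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans


class Encoder(nn.Module):
    """Tiny stand-in for a ResNet backbone plus projection head."""

    def __init__(self, in_dim=128, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)


def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: matching views are positives, the rest of the batch are negatives."""
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


def augment(x):
    """Placeholder 'augmentation': additive noise standing in for image augmentations."""
    return x + 0.1 * torch.randn_like(x)


def train_contrastive(model, data, steps=100, batch_size=256, lr=1e-3):
    """Run a few steps of SimCLR-style contrastive training on `data`."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        idx = torch.randint(0, data.size(0), (batch_size,))
        batch = data[idx]
        loss = info_nce(model(augment(batch)), model(augment(batch)))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


# Toy "uncurated" dataset: random feature vectors standing in for images.
data = torch.randn(5000, 128)

# Stage 1: base contrastive pretraining on the full dataset.
model = train_contrastive(Encoder(), data)

# Stage 2: cluster the learned embeddings to divide the data.
with torch.no_grad():
    emb = model(data).cpu().numpy()
clusters = KMeans(n_clusters=8, n_init=10).fit_predict(emb)

# Stage 3: contrastive training restricted to each cluster, so that
# negatives are drawn from visually similar images.
for c in range(8):
    subset = data[torch.from_numpy(clusters == c)]
    if subset.size(0) > 1:
        train_contrastive(model, subset, steps=20, batch_size=min(256, subset.size(0)))
```

In the paper this "divide" step is followed by distillation back into a single network; the sketch keeps a single model throughout purely to show the alternation between global contrastive learning and within-cluster negatives.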
Results from the Paper
Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Benchmark |
---|---|---|---|---|---|---|
Self-Supervised Image Classification | ImageNet | DnC (ResNet-50) | Top 1 Accuracy | 75.8% | # 63 | |
Self-Supervised Image Classification | ImageNet | DnC (ResNet-50) | Number of Params | 24M | # 48 | |
Self-Supervised Image Classification | ImageNet (finetuned) | DnC (ResNet-50) | Top 1 Accuracy | 78.2% | # 60 | |