Parametric Contrastive Learning

ICCV 2021  ·  Jiequan Cui, Zhisheng Zhong, Shu Liu, Bei Yu, Jiaya Jia

In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition. Based on theoretical analysis, we observe that the supervised contrastive loss tends to be biased toward high-frequency classes, which increases the difficulty of imbalanced learning. We introduce a set of parametric, class-wise learnable centers to rebalance training from an optimization perspective. Further, we analyze our PaCo loss under a balanced setting. Our analysis demonstrates that PaCo adaptively intensifies the pushing of same-class samples closer together as more samples are pulled toward their corresponding centers, which benefits hard-example learning. Experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist 2018 establish a new state of the art for long-tailed recognition. On full ImageNet, models trained with the PaCo loss surpass supervised contrastive learning across various ResNet backbones, e.g., our ResNet-200 achieves 81.8% top-1 accuracy. Our code is available at https://github.com/dvlab-research/Parametric-Contrastive-Learning.
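For intuition, below is a minimal PyTorch sketch of a PaCo-style loss matching the description above: learnable per-class centers are appended to the contrastive candidate set, and the center of a sample's own class is treated as an additional positive. The class name `PaCoLoss`, the hyperparameters `alpha` and `temperature`, and the exact positive weighting are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PaCoLoss(nn.Module):
    """Sketch of a PaCo-style loss: supervised contrastive learning
    augmented with a set of learnable, per-class parametric centers.
    Hyperparameter values and weighting are illustrative."""

    def __init__(self, num_classes, feat_dim, alpha=0.05, temperature=0.07):
        super().__init__()
        # One learnable center per class; these rebalance the
        # contrastive objective from the optimization side.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.alpha = alpha            # weight on sample-to-sample positives
        self.temperature = temperature

    def forward(self, features, labels):
        # features: (B, D) embeddings; labels: (B,) integer class ids
        features = F.normalize(features, dim=1)
        centers = F.normalize(self.centers, dim=1)
        B = features.size(0)

        # Similarities to other in-batch samples and to all class centers.
        logits = torch.cat([features @ features.t(),
                            features @ centers.t()], dim=1) / self.temperature

        # Exclude each sample's similarity with itself from the softmax.
        self_mask = torch.eye(B, dtype=torch.bool, device=features.device)
        logits[:, :B] = logits[:, :B].masked_fill(self_mask, float('-inf'))

        # Positives: same-class in-batch samples (down-weighted by alpha)
        # plus the center of the sample's own class (full weight).
        same_class = labels.view(-1, 1).eq(labels.view(1, -1)).float()
        same_class = same_class.masked_fill(self_mask, 0.0)
        own_center = F.one_hot(labels, self.centers.size(0)).float()
        pos_weight = torch.cat([self.alpha * same_class, own_center], dim=1)

        # Weighted negative log-likelihood over the combined candidate set.
        log_prob = F.log_softmax(logits, dim=1)
        loss = -(pos_weight * log_prob).sum(1) / pos_weight.sum(1)
        return loss.mean()
```

Down-weighting sample-to-sample positives by `alpha` while giving each class center full weight reflects the rebalancing idea in the abstract: a tail-class sample still receives a strong pull toward its center even when few same-class samples appear in the batch.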

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Long-tail Learning | CIFAR-100-LT (ρ=100) | PCL | Error Rate | 49.10 | # 25 |
| Long-tail Learning | CIFAR-10-LT (ρ=10) | PCL | Error Rate | 9.14 | # 14 |
| Image Classification | ImageNet | ResNet-152 | Top-1 Accuracy | 81.3% | # 595 |
| Image Classification | ImageNet | ResNet-200 | Top-1 Accuracy | 81.8% | # 553 |
| Image Classification | ImageNet | ResNet-101 | Top-1 Accuracy | 80.9% | # 618 |
| Long-tail Learning | ImageNet-LT | PaCo(ResNeXt-50) | Top-1 Accuracy | 58.2 | # 19 |
| Long-tail Learning | ImageNet-LT | PaCo(ResNeXt101-32x4d) | Top-1 Accuracy | 60.0 | # 14 |
| Image Classification | iNaturalist 2018 | PaCo(ResNet-152) | Top-1 Accuracy | 75.2% | # 23 |
| Long-tail Learning | iNaturalist 2018 | PaCo(ResNet-152) | Top-1 Accuracy | 75.2% | # 12 |
| Long-tail Learning | Places-LT | PaCo | Top-1 Accuracy | 41.2 | # 14 |
