Contrastive Quant: Quantization Makes Stronger Contrastive Learning

29 Sep 2021 · Yonggan Fu, Qixuan Yu, Meng Li, Xu Ouyang, Vikas Chandra, Yingyan Lin

Contrastive learning, which learns visual representations by enforcing feature consistency across differently augmented views, has emerged as one of the most effective unsupervised learning methods. In this work, we explore contrastive learning from a new perspective, inspired by recent works showing that properly designed weight perturbations or quantization can help models learn a smoother loss landscape. Interestingly, we find that quantization, when properly engineered, can enhance the effectiveness of contrastive learning. To this end, we propose a novel contrastive learning framework, dubbed Contrastive Quant, that encourages feature consistency under both (1) differently augmented inputs, via various data transformations, and (2) differently augmented weights/activations, via various quantization levels. The noise injected by quantization can be viewed as an augmentation of both the model weights and the intermediate activations, complementing the input augmentations. Extensive experiments, built on top of two state-of-the-art contrastive learning methods, SimCLR and BYOL, show that Contrastive Quant consistently improves the learned visual representations, especially with limited labeled data in semi-supervised scenarios. For example, Contrastive Quant achieves 8.69% and 10.27% higher accuracy on ResNet-18 and ResNet-34, respectively, on ImageNet when fine-tuning with 10% labeled data. We believe this work opens up a new perspective for future contrastive learning innovations. All code will be released upon acceptance.
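Since the paper's code has not been released, the following is a minimal PyTorch sketch of the core idea as stated in the abstract: pair a full-precision forward pass on one augmented view with a fake-quantized forward pass on another view, and pull their features together with a contrastive loss. Every name here (`fake_quantize`, `quantized_forward`, `nt_xent`, `contrastive_quant_step`), the candidate bit-widths, and the simplified SimCLR-style loss are illustrative assumptions, not the authors' implementation; the sketch quantizes only weights and omits the activation-augmentation branch and the BYOL variant.

```python
import random
import torch
import torch.nn.functional as F
from torch.func import functional_call  # requires PyTorch 2.x


def fake_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniform symmetric fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    # Quantized values on the forward pass, identity gradient on the backward pass.
    return w + (w_q - w).detach()


def quantized_forward(encoder: torch.nn.Module, x: torch.Tensor, bits: int) -> torch.Tensor:
    """Run `encoder` on `x` with all weights fake-quantized to `bits` bits.

    `functional_call` substitutes the quantized tensors without mutating the
    module, so gradients still flow back to the shared full-precision weights.
    """
    q_params = {name: fake_quantize(p, bits)
                for name, p in encoder.named_parameters()}
    return functional_call(encoder, q_params, (x,))


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Simplified cross-view NT-Xent (SimCLR-style) loss over paired features."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                      # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # matching pairs on the diagonal
    return F.cross_entropy(logits, labels)


def contrastive_quant_step(encoder, x_view1, x_view2, bit_choices=(4, 6, 8)):
    """One step: pair an input-augmented view with a quantization-augmented view."""
    z1 = encoder(x_view1)                           # full-precision pass
    bits = random.choice(bit_choices)               # randomly sampled quantization level
    z2 = quantized_forward(encoder, x_view2, bits)  # quantized-weight pass
    return nt_xent(z1, z2)
```

In this reading, randomly sampling the bit-width per step plays the same role for weights that random cropping or color jitter plays for inputs: each view of the same image is encoded under a slightly different perturbation of the model, and consistency between the resulting features is what the loss enforces.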
