Combined Scaling for Zero-shot Transfer Learning

We present a combined scaling method called BASIC that achieves 85.7% top-1 zero-shot accuracy on the ImageNet ILSVRC-2012 validation set, surpassing the best published zero-shot models (CLIP and ALIGN) by 9.3%. Our BASIC model also shows significant improvements on robustness benchmarks. For instance, on 5 test sets with natural distribution shifts, namely ImageNet-{A,R,V2,Sketch} and ObjectNet, our model achieves 83.7% top-1 average accuracy, only a small drop from its original ImageNet accuracy. To achieve these results, we scale up the contrastive learning framework of CLIP and ALIGN in three dimensions: data size, model size, and batch size. Our dataset has 6.6B noisy image-text pairs, which is 4x larger than ALIGN and 16x larger than CLIP. Our largest model has 3B weights, which is 3.75x larger in parameters and 8x larger in FLOPs than ALIGN and CLIP. Our batch size is 65,536, which is 2x larger than CLIP's and 4x larger than ALIGN's. The main challenge with scaling is the limited memory of accelerators such as GPUs and TPUs. We therefore propose a simple method of online gradient caching to overcome this limit.
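The memory problem the abstract alludes to is that a contrastive loss couples every example in the batch, so the loss gradient with respect to each embedding depends on all other embeddings. Gradient caching splits the work into two passes: first compute all embeddings chunk by chunk without keeping activations, take the loss gradient with respect to the embeddings, then re-encode each chunk and backpropagate using the cached embedding gradients. Below is a minimal NumPy sketch of this two-pass idea, not the authors' implementation; the linear "encoder", the batch-coupled toy loss, and all shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D_IN, D_OUT, CHUNK = 8, 5, 3, 2        # toy sizes (assumptions)
X = rng.normal(size=(N, D_IN))            # a "batch" of inputs
W = rng.normal(size=(D_IN, D_OUT))        # encoder parameters

def encode(x, W):
    # stand-in for an image/text encoder (toy linear map)
    return x @ W

def loss_and_grad_wrt_z(Z):
    # toy batch-coupled loss L = ||Z Z^T - I||_F^2: like a contrastive
    # loss, its gradient w.r.t. each row depends on the whole batch.
    S = Z @ Z.T - np.eye(len(Z))
    return (S ** 2).sum(), 4.0 * S @ Z    # dL/dZ = 4 (Z Z^T - I) Z

# reference: full-batch gradient dL/dW = X^T (dL/dZ)
Z_full = encode(X, W)
_, gZ_full = loss_and_grad_wrt_z(Z_full)
gW_full = X.T @ gZ_full

# gradient caching, pass 1: embeddings only, chunk by chunk
# (in a real framework this pass runs without storing activations)
Z = np.concatenate([encode(X[i:i + CHUNK], W) for i in range(0, N, CHUNK)])
_, gZ_cached = loss_and_grad_wrt_z(Z)     # cache dL/dZ for the whole batch

# pass 2: re-encode each chunk and accumulate parameter gradients
gW = np.zeros_like(W)
for i in range(0, N, CHUNK):
    gW += X[i:i + CHUNK].T @ gZ_cached[i:i + CHUNK]

assert np.allclose(gW, gW_full)           # chunked result matches full batch
```

The trade-off is one extra forward pass per chunk in exchange for never holding the full batch's activations in memory at once, which is what lets the batch size grow past a single accelerator's limit.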

Task                               Dataset          Model  Metric             Value  Global Rank
Zero-Shot Transfer Image Classif.  ImageNet         BASIC  Accuracy (Private) 85.7   # 1
Zero-Shot Transfer Image Classif.  ImageNet-A       BASIC  Accuracy (Private) 85.6   # 1
Zero-Shot Transfer Image Classif.  ImageNet-R       BASIC  Accuracy (Private) 95.7   # 1
Zero-Shot Transfer Image Classif.  ImageNet-Sketch  BASIC  Accuracy (Private) 76.1   # 1
Zero-Shot Transfer Image Classif.  ImageNet V2      BASIC  Accuracy (Private) 80.6   # 1
Zero-Shot Transfer Image Classif.  ObjectNet        BASIC  Accuracy (Private) 78.9   # 2

Methods