EVA-CLIP: Improved Training Techniques for CLIP at Scale

27 Mar 2023  ·  Quan Sun, Yuxin Fang, Ledell Wu, Xinlong Wang, Yue Cao ·

Contrastive language-image pre-training (CLIP) has gained increasing attention for its potential in a wide range of scenarios. In this paper, we propose EVA-CLIP, a series of models that significantly improve the efficiency and effectiveness of CLIP training. Our approach incorporates new techniques for representation learning, optimization, and augmentation, enabling EVA-CLIP to outperform previous CLIP models with the same number of parameters at significantly lower training cost. Notably, our largest 5.0B-parameter EVA-02-CLIP-E/14+, trained on only 9 billion seen samples, achieves 82.0% zero-shot top-1 accuracy on the ImageNet-1K validation set. A smaller EVA-02-CLIP-L/14+, with only 430 million parameters and 6 billion seen samples, achieves 80.4% zero-shot top-1 accuracy on ImageNet-1K val. To facilitate open access and open research, we release the complete suite of EVA-CLIP to the community at https://github.com/baaivision/EVA/tree/master/EVA-CLIP.
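The contrastive objective at the heart of CLIP-style training scores every image embedding against every text embedding in a batch and trains the matched pairs to rank highest in both directions. A minimal NumPy sketch of that symmetric loss is below; the function name, temperature value, and batch shapes are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, txt_emb: (N, D) arrays where row i of each is a matched pair.
    """
    # L2-normalize so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)

    # (N, N) similarity matrix; entry (i, j) scores image i against text j.
    logits = img @ txt.T / temperature
    labels = np.arange(len(img))  # the matched pair sits on the diagonal

    def cross_entropy(l):
        # Numerically stable log-softmax over each row.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image->text and text->image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With matched pairs the diagonal dominates and the loss is near zero; with unrelated pairs it approaches log N for a batch of N.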

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Zero-Shot Transfer Image Classification | Food-101 | EVA-CLIP-E/14+ | Top-1 Accuracy | 94.9 | #4 |
| Zero-Shot Transfer Image Classification | ImageNet | EVA-CLIP-E/14+ | Accuracy (Private) | 82.0 | #13 |
| Zero-Shot Transfer Image Classification | ImageNet-A | EVA-CLIP-E/14+ | Accuracy (Private) | 82.1 | #8 |
| Zero-Shot Transfer Image Classification | ImageNet-R | EVA-CLIP-E/14+ | Accuracy | 94.5 | #7 |
| Zero-Shot Transfer Image Classification | ImageNet-Sketch | EVA-CLIP-E/14+ | Accuracy (Private) | 71.6 | #6 |
| Zero-Shot Transfer Image Classification | ImageNet V2 | EVA-CLIP-E/14+ | Accuracy (Private) | 75.7 | #9 |
| Zero-Shot Transfer Image Classification | ObjectNet | EVA-CLIP-E/14+ | Accuracy (Private) | 79.6 | #7 |
| Image Classification | ObjectNet | EVA-02-CLIP-E/14+ | Top-1 Accuracy | 79.6 | #4 |
| Zero-Shot Action Recognition | UCF101 | EVA-CLIP-E/14+ | Top-1 Accuracy | 83.1 | #7 |
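The zero-shot numbers above all follow the same protocol: embed each class name through a prompt template with the text encoder, embed the image with the image encoder, and pick the class whose text embedding has the highest cosine similarity. The sketch below shows that procedure end to end; the random projection matrices are hypothetical stand-ins for the trained EVA-CLIP encoders (real checkpoints are in the repository linked in the abstract), and the feature dimensions and single-template prompts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W_img = rng.normal(size=(2048, 512))  # stand-in: image features -> shared space
W_txt = rng.normal(size=(768, 512))   # stand-in: text features  -> shared space

def embed(features, W):
    """Project into the shared embedding space and L2-normalize."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# One prompt per class; real evaluations typically average many templates.
class_prompts = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
txt_feats = rng.normal(size=(len(class_prompts), 768))  # pretend text-encoder output
img_feats = rng.normal(size=(1, 2048))                  # pretend image-encoder output

text_emb = embed(txt_feats, W_txt)
image_emb = embed(img_feats, W_img)

logits = 100.0 * image_emb @ text_emb.T        # scaled cosine similarities
logits -= logits.max(axis=-1, keepdims=True)   # stable softmax
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
pred = int(probs.argmax(axis=-1)[0])           # predicted class index
```

No classifier head is trained per dataset, which is why a single checkpoint can be scored across ImageNet variants, ObjectNet, Food-101, and UCF101 alike.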
