Exploring the Limits of Weakly Supervised Pretraining

State-of-the-art visual perception models for a wide range of tasks rely on supervised pretraining. ImageNet classification is the de facto pretraining task for these models...


Datasets


Introduced in the Paper:

IG-3.5B-17k

Mentioned in the Paper:

ImageNet, COCO, Places, YFCC100M, JFT-300M

Results from the Paper


Ranked #25 on Image Classification on ImageNet (using extra training data)

Task: Image Classification · Dataset: ImageNet · Uses extra training data: yes

MODEL              | TOP 1 ACCURACY   | TOP 5 ACCURACY   | NUMBER OF PARAMS
ResNeXt-101 32x48d | 85.4% (rank #25) | 97.6% (rank #12) | 829M (rank #2)
ResNeXt-101 32x16d | 84.2% (rank #37) | 97.2% (rank #16) | 194M (rank #18)
ResNeXt-101 32x32d | 85.1% (rank #29) | 97.5% (rank #13) | 466M (rank #7)

Results from Other Papers


Task: Image Classification · Dataset: ImageNet

MODEL             | TOP 1 ACCURACY   | TOP 5 ACCURACY   | NUMBER OF PARAMS
ResNeXt-101 32x8d | 82.2% (rank #60) | 96.4% (rank #25) | 88M (rank #34)

Methods used in the Paper


METHOD                 | TYPE
Average Pooling        | Pooling Operations
ResNeXt Block          | Skip Connection Blocks
Grouped Convolution    | Convolutions
Global Average Pooling | Pooling Operations
Kaiming Initialization | Initialization
Residual Connection    | Skip Connections
ReLU                   | Activation Functions
1x1 Convolution        | Convolutions
Convolution            | Convolutions
Random Horizontal Flip | Image Data Augmentation
Random Resized Crop    | Image Data Augmentation
Batch Normalization    | Normalization
SGD with Momentum      | Stochastic Optimization
ResNeXt                | Convolutional Neural Networks