AdvProp

Last updated on Feb 14, 2021

tf_efficientnet_b0_ap

Parameters 5 Million
FLOPs 489 Million
File Size 20.40 MB
Training Data ImageNet

Training Techniques AdvProp, RMSProp, Weight Decay, Label Smoothing, AutoAugment, Stochastic Depth
Architecture 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, Squeeze-and-Excitation Block, Swish
ID tf_efficientnet_b0_ap
LR 0.256
Epochs 350
Crop Pct 0.875
Momentum 0.9
Batch Size 2048
Image Size 224
Weight Decay 0.00001
Interpolation bicubic
RMSProp Decay 0.9
Label Smoothing 0.1
BatchNorm Momentum 0.99
tf_efficientnet_b1_ap

Parameters 8 Million
FLOPs 884 Million
File Size 30.06 MB
Training Data ImageNet

Training Techniques AdvProp, RMSProp, Weight Decay, Label Smoothing, AutoAugment, Stochastic Depth
Architecture 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, Squeeze-and-Excitation Block, Swish
ID tf_efficientnet_b1_ap
LR 0.256
Epochs 350
Crop Pct 0.882
Momentum 0.9
Batch Size 2048
Image Size 240
Weight Decay 0.00001
Interpolation bicubic
RMSProp Decay 0.9
Label Smoothing 0.1
BatchNorm Momentum 0.99
tf_efficientnet_b2_ap

Parameters 9 Million
FLOPs 1 Billion
File Size 35.10 MB
Training Data ImageNet

Training Techniques AdvProp, RMSProp, Weight Decay, Label Smoothing, AutoAugment, Stochastic Depth
Architecture 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, Squeeze-and-Excitation Block, Swish
ID tf_efficientnet_b2_ap
LR 0.256
Epochs 350
Crop Pct 0.89
Momentum 0.9
Batch Size 2048
Image Size 260
Weight Decay 0.00001
Interpolation bicubic
RMSProp Decay 0.9
Label Smoothing 0.1
BatchNorm Momentum 0.99
tf_efficientnet_b3_ap

Parameters 12 Million
FLOPs 2 Billion
File Size 47.10 MB
Training Data ImageNet

Training Techniques AdvProp, RMSProp, Weight Decay, Label Smoothing, AutoAugment, Stochastic Depth
Architecture 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, Squeeze-and-Excitation Block, Swish
ID tf_efficientnet_b3_ap
LR 0.256
Epochs 350
Crop Pct 0.904
Momentum 0.9
Batch Size 2048
Image Size 300
Weight Decay 0.00001
Interpolation bicubic
RMSProp Decay 0.9
Label Smoothing 0.1
BatchNorm Momentum 0.99
tf_efficientnet_b4_ap

Parameters 19 Million
FLOPs 6 Billion
File Size 74.38 MB
Training Data ImageNet

Training Techniques AdvProp, RMSProp, Weight Decay, Label Smoothing, AutoAugment, Stochastic Depth
Architecture 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, Squeeze-and-Excitation Block, Swish
ID tf_efficientnet_b4_ap
LR 0.256
Epochs 350
Crop Pct 0.922
Momentum 0.9
Batch Size 2048
Image Size 380
Weight Decay 0.00001
Interpolation bicubic
RMSProp Decay 0.9
Label Smoothing 0.1
BatchNorm Momentum 0.99
tf_efficientnet_b5_ap

Parameters 30 Million
FLOPs 13 Billion
File Size 116.73 MB
Training Data ImageNet

Training Techniques AdvProp, RMSProp, Weight Decay, Label Smoothing, AutoAugment, Stochastic Depth
Architecture 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, Squeeze-and-Excitation Block, Swish
ID tf_efficientnet_b5_ap
LR 0.256
Epochs 350
Crop Pct 0.934
Momentum 0.9
Batch Size 2048
Image Size 456
Weight Decay 0.00001
Interpolation bicubic
RMSProp Decay 0.9
Label Smoothing 0.1
BatchNorm Momentum 0.99
tf_efficientnet_b6_ap

Parameters 43 Million
FLOPs 24 Billion
File Size 165.21 MB
Training Data ImageNet

Training Techniques AdvProp, RMSProp, Weight Decay, Label Smoothing, AutoAugment, Stochastic Depth
Architecture 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, Squeeze-and-Excitation Block, Swish
ID tf_efficientnet_b6_ap
LR 0.256
Epochs 350
Crop Pct 0.942
Momentum 0.9
Batch Size 2048
Image Size 528
Weight Decay 0.00001
Interpolation bicubic
RMSProp Decay 0.9
Label Smoothing 0.1
BatchNorm Momentum 0.99
tf_efficientnet_b7_ap

Parameters 66 Million
FLOPs 48 Billion
File Size 254.49 MB
Training Data ImageNet

Training Techniques AdvProp, RMSProp, Weight Decay, Label Smoothing, AutoAugment, Stochastic Depth
Architecture 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, Squeeze-and-Excitation Block, Swish
ID tf_efficientnet_b7_ap
LR 0.256
Epochs 350
Crop Pct 0.949
Momentum 0.9
Batch Size 2048
Image Size 600
Weight Decay 0.00001
Interpolation bicubic
RMSProp Decay 0.9
Label Smoothing 0.1
BatchNorm Momentum 0.99
tf_efficientnet_b8_ap

Parameters 87 Million
FLOPs 81 Billion
File Size 335.13 MB
Training Data ImageNet

Training Techniques AdvProp, RMSProp, Weight Decay, Label Smoothing, AutoAugment, Stochastic Depth
Architecture 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, Squeeze-and-Excitation Block, Swish
ID tf_efficientnet_b8_ap
LR 0.128
Epochs 350
Crop Pct 0.954
Momentum 0.9
Batch Size 2048
Image Size 672
Weight Decay 0.00001
Interpolation bicubic
RMSProp Decay 0.9
Label Smoothing 0.1
BatchNorm Momentum 0.99

Summary

AdvProp is an adversarial training scheme that treats adversarial examples as additional training examples in order to prevent overfitting. Key to the method is the use of a separate auxiliary batch norm for adversarial examples, since their underlying distribution differs from that of normal examples.
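
The mechanics of that auxiliary batch norm can be sketched in a few lines of PyTorch. This is a minimal illustration of the idea, not the authors' implementation: the `AuxBatchNorm2d` module, the `adversarial` flag, and the single FGSM-style attack step are simplifying assumptions (the paper generates adversarial examples with multi-step PGD).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxBatchNorm2d(nn.Module):
    """Batch norm with two sets of statistics: the main one for clean
    examples and an auxiliary one for adversarial examples."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_main = nn.BatchNorm2d(num_features)
        self.bn_aux = nn.BatchNorm2d(num_features)

    def forward(self, x, adversarial=False):
        return self.bn_aux(x) if adversarial else self.bn_main(x)

def advprop_step(model, x, y, optimizer, eps=1e-2):
    """One AdvProp update. `model(x, adversarial=...)` is assumed to
    route the flag to every AuxBatchNorm2d layer (hypothetical model)."""
    # Craft adversarial examples against the auxiliary statistics
    # (one FGSM-style step here; the paper uses multi-step PGD).
    x_adv = x.clone().requires_grad_(True)
    attack_loss = F.cross_entropy(model(x_adv, adversarial=True), y)
    grad, = torch.autograd.grad(attack_loss, x_adv)
    x_adv = (x_adv + eps * grad.sign()).detach()

    # Joint update: clean examples flow through the main batch norms,
    # adversarial examples through the auxiliary ones.
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(x, adversarial=False), y)
            + F.cross_entropy(model(x_adv, adversarial=True), y))
    loss.backward()
    optimizer.step()
```

At inference time only the main statistics are used, so the released `*_ap` checkpoints load and run like ordinary EfficientNets.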

How do I load this model?

To load a pretrained model:

```python
import timm

# Download (if needed) and build the pretrained AdvProp variant,
# then switch it to inference mode.
m = timm.create_model('tf_efficientnet_b0_ap', pretrained=True)
m.eval()
```

Replace the model name with the variant you want to use, e.g. tf_efficientnet_b0_ap. You can find the IDs in the model summaries at the top of this page.
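
For end-to-end inference, the per-variant preprocessing listed in the cards above (image size, crop percentage, bicubic interpolation) can be resolved from the model itself via timm's data helpers. A short sketch; the image path is a placeholder:

```python
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform

m = timm.create_model('tf_efficientnet_b0_ap', pretrained=True).eval()

# Resolve the eval preprocessing from the model's default config
# (224px input, crop pct 0.875, bicubic interpolation for b0).
config = resolve_data_config({}, model=m)
transform = create_transform(**config)

img = Image.open('dog.jpg').convert('RGB')  # placeholder image path
with torch.no_grad():
    logits = m(transform(img).unsqueeze(0))
probs, classes = logits.softmax(dim=-1).topk(5)
```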

How do I train this model?

You can follow the timm recipe scripts for training a new model from scratch.
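
As a rough Python sketch of how the hyperparameters in the cards above map onto an optimizer setup (b0 variant shown; a real run would use the timm training script with distributed data loading, an LR decay schedule, AutoAugment, stochastic depth, and timm's RMSpropTF optimizer, which matches the TensorFlow RMSProp behavior more closely than plain torch.optim.RMSprop):

```python
import timm
import torch

# Hyperparameters taken from the tf_efficientnet_b0_ap card above.
model = timm.create_model('tf_efficientnet_b0_ap', pretrained=False)

optimizer = torch.optim.RMSprop(
    model.parameters(),
    lr=0.256,           # LR (decayed on a schedule over the 350 epochs)
    alpha=0.9,          # RMSProp decay 0.9
    momentum=0.9,       # Momentum 0.9
    weight_decay=1e-5,  # Weight decay 0.00001
)
criterion = torch.nn.CrossEntropyLoss(label_smoothing=0.1)  # Label smoothing 0.1
```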

Citation

@misc{xie2020adversarial,
      title={Adversarial Examples Improve Image Recognition}, 
      author={Cihang Xie and Mingxing Tan and Boqing Gong and Jiang Wang and Alan Yuille and Quoc V. Le},
      year={2020},
      eprint={1911.09665},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Results

Image Classification on ImageNet

| Model                 | Top-1 Accuracy | Top-5 Accuracy |
|-----------------------|----------------|----------------|
| tf_efficientnet_b8_ap | 85.37%         | 97.3%          |
| tf_efficientnet_b7_ap | 85.12%         | 97.25%         |
| tf_efficientnet_b6_ap | 84.79%         | 97.14%         |
| tf_efficientnet_b5_ap | 84.25%         | 96.97%         |
| tf_efficientnet_b4_ap | 83.26%         | 96.39%         |
| tf_efficientnet_b3_ap | 81.82%         | 95.62%         |
| tf_efficientnet_b2_ap | 80.3%          | 95.03%         |
| tf_efficientnet_b1_ap | 79.28%         | 94.3%          |
| tf_efficientnet_b0_ap | 77.1%          | 93.26%         |
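
For reference, top-1/top-5 accuracy is the fraction of validation images whose true label appears among the model's 1 (respectively 5) highest-scoring classes. A minimal helper to compute both from a batch of logits (the function name is our own):

```python
import torch

def topk_accuracy(logits, targets, ks=(1, 5)):
    """Fraction of samples whose true label is among the top-k
    highest-scoring classes, for each k in `ks`."""
    _, pred = logits.topk(max(ks), dim=1)  # (N, max_k) class indices
    hits = pred.eq(targets.unsqueeze(1))   # (N, max_k) boolean matches
    return [hits[:, :k].any(dim=1).float().mean().item() for k in ks]
```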