Res2Net

Last updated on Feb 14, 2021

res2net101_26w_4s

Parameters: 45 Million
FLOPs: 10 Billion
File Size: 173.05 MB
Training Data: ImageNet
Training Resources: 4x Titan Xp GPUs
Training Time:
Training Techniques: SGD with Momentum, Weight Decay
Architecture: Batch Normalization, Convolution, Global Average Pooling, Res2Net Block, ReLU
ID: res2net101_26w_4s
LR: 0.1
Epochs: 100
Crop Pct: 0.875
Momentum: 0.9
Batch Size: 256
Image Size: 224
Weight Decay: 0.0001
Interpolation: bilinear
res2net50_14w_8s

Parameters: 25 Million
FLOPs: 5 Billion
File Size: 95.98 MB
Training Data: ImageNet
Training Resources: 4x Titan Xp GPUs
Training Time:
Training Techniques: SGD with Momentum, Weight Decay
Architecture: Batch Normalization, Convolution, Global Average Pooling, Res2Net Block, ReLU
ID: res2net50_14w_8s
LR: 0.1
Epochs: 100
Crop Pct: 0.875
Momentum: 0.9
Batch Size: 256
Image Size: 224
Weight Decay: 0.0001
Interpolation: bilinear
res2net50_26w_4s

Parameters: 26 Million
FLOPs: 5 Billion
File Size: 98.33 MB
Training Data: ImageNet
Training Resources: 4x Titan Xp GPUs
Training Time:
Training Techniques: SGD with Momentum, Weight Decay
Architecture: Batch Normalization, Convolution, Global Average Pooling, Res2Net Block, ReLU
ID: res2net50_26w_4s
LR: 0.1
Epochs: 100
Crop Pct: 0.875
Momentum: 0.9
Batch Size: 256
Image Size: 224
Weight Decay: 0.0001
Interpolation: bilinear
res2net50_26w_6s

Parameters: 37 Million
FLOPs: 8 Billion
File Size: 141.72 MB
Training Data: ImageNet
Training Resources: 4x Titan Xp GPUs
Training Time:
Training Techniques: SGD with Momentum, Weight Decay
Architecture: Batch Normalization, Convolution, Global Average Pooling, Res2Net Block, ReLU
ID: res2net50_26w_6s
LR: 0.1
Epochs: 100
Crop Pct: 0.875
Momentum: 0.9
Batch Size: 256
Image Size: 224
Weight Decay: 0.0001
Interpolation: bilinear
res2net50_26w_8s

Parameters: 48 Million
FLOPs: 11 Billion
File Size: 185.09 MB
Training Data: ImageNet
Training Resources: 4x Titan Xp GPUs
Training Time:
Training Techniques: SGD with Momentum, Weight Decay
Architecture: Batch Normalization, Convolution, Global Average Pooling, Res2Net Block, ReLU
ID: res2net50_26w_8s
LR: 0.1
Epochs: 100
Crop Pct: 0.875
Momentum: 0.9
Batch Size: 256
Image Size: 224
Weight Decay: 0.0001
Interpolation: bilinear
res2net50_48w_2s

Parameters: 25 Million
FLOPs: 5 Billion
File Size: 96.72 MB
Training Data: ImageNet
Training Resources: 4x Titan Xp GPUs
Training Time:
Training Techniques: SGD with Momentum, Weight Decay
Architecture: Batch Normalization, Convolution, Global Average Pooling, Res2Net Block, ReLU
ID: res2net50_48w_2s
LR: 0.1
Epochs: 100
Crop Pct: 0.875
Momentum: 0.9
Batch Size: 256
Image Size: 224
Weight Decay: 0.0001
Interpolation: bilinear

Summary

Res2Net is an image classification model that employs a variation on the standard bottleneck residual block, the Res2Net block. The motivation is to represent features at multiple scales. This is achieved with a novel building block for CNNs that constructs hierarchical residual-like connections within a single residual block, which represents multi-scale features at a granular level and increases the range of receptive fields for each network layer.
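
To make the block structure concrete, below is a simplified, hypothetical PyTorch sketch of a Res2Net-style bottleneck (width channels per group, scale groups). It is an illustration of the idea only, not the timm implementation; it omits stride, downsampling, and other details of the real blocks.

import torch
import torch.nn as nn

class Res2NetBottleneckSketch(nn.Module):
    # Illustrative Res2Net-style bottleneck: `scale` groups of `width` channels each.
    def __init__(self, channels, width=26, scale=4):
        super().__init__()
        self.scale = scale
        mid = width * scale
        self.conv1 = nn.Conv2d(channels, mid, 1, bias=False)   # 1x1 reduce
        self.bn1 = nn.BatchNorm2d(mid)
        # One 3x3 conv per group except the first, which is passed through unchanged.
        self.convs = nn.ModuleList([nn.Conv2d(width, width, 3, padding=1, bias=False)
                                    for _ in range(scale - 1)])
        self.bns = nn.ModuleList([nn.BatchNorm2d(width) for _ in range(scale - 1)])
        self.conv3 = nn.Conv2d(mid, channels, 1, bias=False)   # 1x1 expand
        self.bn3 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        # Split the bottleneck width into `scale` groups and wire them hierarchically:
        # group i also receives the output of group i-1, so later groups see
        # progressively larger receptive fields within the same block.
        xs = torch.chunk(out, self.scale, dim=1)
        ys = [xs[0]]
        for i, (conv, bn) in enumerate(zip(self.convs, self.bns)):
            inp = xs[i + 1] if i == 0 else xs[i + 1] + ys[-1]
            ys.append(self.relu(bn(conv(inp))))
        out = self.bn3(self.conv3(torch.cat(ys, dim=1)))
        return self.relu(out + identity)

block = Res2NetBottleneckSketch(channels=104, width=26, scale=4)
y = block(torch.randn(1, 104, 56, 56))   # output shape: (1, 104, 56, 56)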

How do I load this model?

To load a pretrained model:

import timm
m = timm.create_model('res2net50_14w_8s', pretrained=True)
m.eval()

Replace the model name with the variant you want to use, e.g. res2net50_14w_8s. You can find the IDs in the model summaries at the top of this page.
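
To run the pretrained model on a single image, a minimal sketch could look like the following. It reuses m from the snippet above, and 'dog.jpg' is a placeholder for any local image path.

import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform

# Recreate the preprocessing the weights were evaluated with
# (224 image size, 0.875 crop pct, bilinear interpolation, per the tables above).
config = resolve_data_config({}, model=m)
transform = create_transform(**config)

img = Image.open('dog.jpg').convert('RGB')   # placeholder path
tensor = transform(img).unsqueeze(0)         # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = m(tensor)
probs = logits.softmax(dim=-1)[0]
top5 = torch.topk(probs, k=5)
print(top5.indices.tolist(), top5.values.tolist())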

How do I train this model?

You can follow the timm recipe scripts for training a new model from scratch.

Citation

@article{Gao_2021,
   title={Res2Net: A New Multi-Scale Backbone Architecture},
   volume={43},
   ISSN={1939-3539},
   url={http://dx.doi.org/10.1109/TPAMI.2019.2938758},
   DOI={10.1109/tpami.2019.2938758},
   number={2},
   journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
   publisher={Institute of Electrical and Electronics Engineers (IEEE)},
   author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
   year={2021},
   month={Feb},
   pages={652–662}
}

Results

Image Classification on ImageNet

Model               Top-1 Accuracy   Top-5 Accuracy
res2net50_26w_8s    79.19%           94.37%
res2net101_26w_4s   79.19%           94.43%
res2net50_26w_6s    78.57%           94.12%
res2net50_14w_8s    78.14%           93.86%
res2net50_26w_4s    77.99%           93.85%
res2net50_48w_2s    77.53%           93.56%