All five `legacy_seresnet` variants share the same training techniques and architectural components:

Training Techniques | SGD with Momentum, Weight Decay, Label Smoothing
---|---
Architecture | Squeeze-and-Excitation Block, 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax
IDs | legacy_seresnet18, legacy_seresnet34, legacy_seresnet50, legacy_seresnet101, legacy_seresnet152
SE-ResNet is a ResNet variant that employs squeeze-and-excitation (SE) blocks, enabling the network to perform dynamic channel-wise feature recalibration.
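The squeeze-and-excitation idea can be sketched in a few lines of PyTorch. This is an illustrative minimal block, not timm's exact implementation (timm's version uses 1x1 convolutions for the excitation MLP, and `reduction=16` here is a conventional default, not a value taken from this page):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Minimal squeeze-and-excitation block (illustrative sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        # Squeeze: global average pool to one descriptor per channel.
        s = x.mean(dim=(2, 3))
        # Excitation: bottleneck MLP producing per-channel gates in (0, 1).
        g = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))
        # Recalibrate: rescale each channel of x by its gate.
        return x * g[:, :, None, None]

x = torch.randn(2, 64, 8, 8)
out = SEBlock(64)(x)
print(out.shape)  # torch.Size([2, 64, 8, 8])
```

Because the gates lie in (0, 1), the block can only attenuate channels, never amplify them; the output shape matches the input, so it drops into any residual block unchanged.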
To load a pretrained model:

```python
import timm

m = timm.create_model('legacy_seresnet101', pretrained=True)
m.eval()
```

Replace the model name with the variant you want to use, e.g. `legacy_seresnet101`. You can find the IDs in the model summaries at the top of this page.
You can follow the timm recipe scripts for training a new model from scratch.
```bibtex
@misc{hu2019squeezeandexcitation,
      title={Squeeze-and-Excitation Networks},
      author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and Enhua Wu},
      year={2019},
      eprint={1709.01507},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
MODEL | TOP 1 ACCURACY | TOP 5 ACCURACY |
---|---|---|
legacy_seresnet152 | 78.67% | 94.38% |
legacy_seresnet101 | 78.38% | 94.26% |
legacy_seresnet50 | 77.64% | 93.74% |
legacy_seresnet34 | 74.79% | 92.13% |
legacy_seresnet18 | 71.74% | 90.34% |