All five ECA-ResNet variants on this page share the same training techniques and architectural components; only the model ID differs:

| ID | ecaresnet101d, ecaresnet101d_pruned, ecaresnet50d, ecaresnet50d_pruned, ecaresnetlight |
|---|---|
| Training Techniques | SGD with Momentum, Weight Decay |
| Architecture | 1x1 Convolution, Efficient Channel Attention, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax, Squeeze-and-Excitation Block |
An ECA-ResNet is a variant of ResNet that uses an Efficient Channel Attention module. Efficient Channel Attention is an architectural unit, based on the squeeze-and-excitation block, that reduces model complexity by avoiding dimensionality reduction: channel attention is computed with a fast 1D convolution over the globally pooled channel descriptor rather than with a fully connected bottleneck.
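A minimal PyTorch sketch of the idea (not timm's exact implementation; the adaptive kernel-size rule follows the paper with gamma=2, b=1):

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: channel gating via a 1D convolution
    over globally pooled features, with no dimensionality reduction."""

    def __init__(self, channels, gamma=2, beta=1):
        super().__init__()
        # Adaptive kernel size from the paper: k grows with log2(channels)
        t = int(abs((math.log2(channels) + beta) / gamma))
        k = t if t % 2 else t + 1  # force an odd kernel size
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        # x: (N, C, H, W) -> global average pool to (N, C)
        y = x.mean(dim=(2, 3))
        # 1D conv across channels captures local cross-channel interaction
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # (N, C)
        # Sigmoid gate, broadcast over the spatial dimensions
        return x * torch.sigmoid(y)[:, :, None, None]
```

In an ECA-ResNet, a module like this takes the place of the SE block inside each bottleneck residual block, gating the output channels before the residual addition.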
To load a pretrained model:
```python
import timm

# Create the model and load pretrained ImageNet weights
m = timm.create_model('ecaresnet50d', pretrained=True)
m.eval()  # inference mode: disables dropout, uses running BN statistics
```
Replace the model name with the variant you want to use, e.g. `ecaresnet50d`. You can find the IDs in the model summary table at the top of this page.
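To run inference with the loaded model, the matching preprocessing pipeline can be derived from the model's pretrained data config. A short sketch (`dog.jpg` is a placeholder path):

```python
import torch
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform

# Build the eval transform (resize, crop, normalize) that matches
# the data config of the pretrained weights
config = resolve_data_config({}, model=m)
transform = create_transform(**config)

img = Image.open('dog.jpg').convert('RGB')  # placeholder image path
with torch.no_grad():
    logits = m(transform(img).unsqueeze(0))  # add batch dimension
top5_prob, top5_idx = logits.softmax(dim=-1).topk(5)
```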
You can follow the timm recipe scripts for training a new model from scratch.
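For instance, an illustrative invocation of timm's `train.py` might look like the following; the hyperparameter values are placeholders rather than the published recipe (the summary above only records SGD with momentum and weight decay):

```bash
python train.py /path/to/imagenet --model ecaresnet50d \
    --opt sgd --momentum 0.9 --weight-decay 1e-4 \
    --batch-size 256 --lr 0.1 --epochs 100
```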
The ECA-Net paper can be cited as follows:

```bibtex
@misc{wang2020ecanet,
    title={ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks},
    author={Qilong Wang and Banggu Wu and Pengfei Zhu and Peihua Li and Wangmeng Zuo and Qinghua Hu},
    year={2020},
    eprint={1910.03151},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
ImageNet classification accuracy of the pretrained variants:

| MODEL | TOP 1 ACCURACY | TOP 5 ACCURACY |
|---|---|---|
| ecaresnet101d | 82.18% | 96.06% |
| ecaresnet101d_pruned | 80.82% | 95.64% |
| ecaresnet50d | 80.61% | 95.31% |
| ecaresnetlight | 80.46% | 95.25% |
| ecaresnet50d_pruned | 79.71% | 94.88% |