ID | Training Techniques | Architecture |
---|---|---|
mobilenetv2_100 | RMSProp, Weight Decay | 1x1 Convolution, Batch Normalization, Convolution, Depthwise Separable Convolution, Dropout, Inverted Residual Block, Residual Connection, ReLU6, Max Pooling, Softmax |
mobilenetv2_110d | RMSProp, Weight Decay | 1x1 Convolution, Batch Normalization, Convolution, Depthwise Separable Convolution, Dropout, Inverted Residual Block, Residual Connection, ReLU6, Max Pooling, Softmax |
mobilenetv2_120d | RMSProp, Weight Decay | 1x1 Convolution, Batch Normalization, Convolution, Depthwise Separable Convolution, Dropout, Inverted Residual Block, Residual Connection, ReLU6, Max Pooling, Softmax |
mobilenetv2_140 | RMSProp, Weight Decay | 1x1 Convolution, Batch Normalization, Convolution, Depthwise Separable Convolution, Dropout, Inverted Residual Block, Residual Connection, ReLU6, Max Pooling, Softmax |
MobileNetV2 is a convolutional neural network architecture designed to perform well on mobile devices. It is based on an inverted residual structure in which the residual connections run between the bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features, with ReLU6 as the source of non-linearity, while the projection back to the bottleneck is linear. As a whole, MobileNetV2 consists of an initial full convolution layer with 32 filters, followed by 19 residual bottleneck layers.
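The inverted residual block described above can be sketched in plain PyTorch. This is a minimal illustration of the expand → depthwise → linear-project pattern, not timm's actual implementation (which adds stride/dilation handling and configurable activations):

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Minimal sketch of a MobileNetV2 inverted residual block:
    1x1 expansion -> 3x3 depthwise -> 1x1 linear projection."""

    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6):
        super().__init__()
        hidden = in_ch * expand_ratio
        # Residual connection only when shapes match (stride 1, same channels)
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 pointwise expansion with ReLU6
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution (groups == channels)
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 1x1 linear projection -- no activation (linear bottleneck)
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_res else out

block = InvertedResidual(32, 32)
x = torch.randn(1, 32, 56, 56)
print(block(x).shape)  # torch.Size([1, 32, 56, 56])
```

Note that the final 1x1 projection has no activation: applying ReLU6 there would destroy information in the low-dimensional bottleneck, which is the paper's motivation for linear bottlenecks.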
To load a pretrained model:

```python
import timm
m = timm.create_model('mobilenetv2_100', pretrained=True)
m.eval()
```

Replace the model name with the variant you want to use, e.g. `mobilenetv2_100`. You can find the IDs in the model summaries at the top of this page.
You can follow the timm recipe scripts to train a new model from scratch.
```bibtex
@article{DBLP:journals/corr/abs-1801-04381,
  author    = {Mark Sandler and
               Andrew G. Howard and
               Menglong Zhu and
               Andrey Zhmoginov and
               Liang{-}Chieh Chen},
  title     = {Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification,
               Detection and Segmentation},
  journal   = {CoRR},
  volume    = {abs/1801.04381},
  year      = {2018},
  url       = {http://arxiv.org/abs/1801.04381},
  archivePrefix = {arXiv},
  eprint    = {1801.04381},
  timestamp = {Tue, 12 Jan 2021 15:30:06 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1801-04381.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
MODEL | TOP 1 ACCURACY | TOP 5 ACCURACY |
---|---|---|
mobilenetv2_120d | 77.28% | 93.51% |
mobilenetv2_140 | 76.51% | 93.00% |
mobilenetv2_110d | 75.05% | 92.19% |
mobilenetv2_100 | 72.95% | 91.00% |