FBNet

Last updated on Feb 14, 2021

fbnetc_100

- Parameters: 6 Million
- FLOPs: 509 Million
- File Size: 21.48 MB
- Training Data: ImageNet
- Training Resources: 8x GPUs
- Training Techniques: SGD with Momentum, Weight Decay
- Architecture: 1x1 Convolution, Convolution, Dense Connections, Dropout, FBNet Block, Global Average Pooling, Softmax
- ID: fbnetc_100
- LR: 0.1
- Epochs: 360
- Layers: 22
- Dropout: 0.2
- Crop Pct: 0.875
- Momentum: 0.9
- Batch Size: 256
- Image Size: 224
- Weight Decay: 0.0005
- Interpolation: bilinear

Summary

FBNet is a family of convolutional neural architectures discovered through DNAS (Differentiable Neural Architecture Search). It uses a basic image-model block, inspired by MobileNetV2, that employs depthwise convolutions and an inverted residual structure (see components).

The principal building block is the FBNet Block.
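The appeal of the depthwise-separable design inside such blocks is its cost: a standard convolution mixes all input channels for every output channel, while a depthwise convolution followed by a 1x1 pointwise convolution splits that work. A rough multiply-add count makes the saving concrete (the shapes below are illustrative, not taken from the actual fbnetc_100 configuration):

```python
# Rough FLOP (multiply-add) comparison for one 3x3 conv layer, illustrating
# why MobileNetV2-style blocks use depthwise separable convolutions.
# Shapes are illustrative only, not the real fbnetc_100 layer sizes.

h = w = 28          # feature-map height and width
c_in = c_out = 64   # input and output channels
k = 3               # kernel size

# Standard convolution: every output channel mixes all input channels.
standard = h * w * c_in * c_out * k * k

# Depthwise separable: a per-channel k x k conv, then a 1x1 pointwise conv.
depthwise = h * w * c_in * k * k
pointwise = h * w * c_in * c_out
separable = depthwise + pointwise

print(standard)                       # 28_901_376
print(separable)                      # 3_662_848
print(round(standard / separable, 1))  # ~7.9x fewer multiply-adds
```

For a 3x3 kernel the ratio approaches k*k + a small pointwise overhead, which is why these blocks dominate mobile-oriented search spaces like FBNet's.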

How do I load this model?

To load a pretrained model:

```python
import timm
m = timm.create_model('fbnetc_100', pretrained=True)
m.eval()
```

Replace the model name with the variant you want to use, e.g. fbnetc_100. You can find the IDs in the model summaries at the top of this page.
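For inference, the metadata above (Image Size 224, Crop Pct 0.875, bilinear interpolation) implies the usual timm-style evaluation preprocessing: resize the shorter edge, then take a center crop. A quick sanity check of the implied resize size, assuming that standard resize-then-crop pipeline:

```python
# Derive the eval resize size from the model card's metadata, assuming the
# common timm evaluation pipeline: resize shorter edge to image_size/crop_pct
# (bilinear), then center-crop image_size x image_size.

image_size = 224    # "Image Size" from the metadata above
crop_pct = 0.875    # "Crop Pct" from the metadata above

resize_size = int(image_size / crop_pct)
print(resize_size)  # 256
```

So evaluation images are resized to 256 on the shorter edge and center-cropped to 224x224 before normalization.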

How do I train this model?

You can follow the timm recipe scripts for training a new model from scratch.
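The listed training techniques, SGD with momentum plus weight decay, reduce to a short update rule. A minimal sketch on a single scalar parameter, using the hyperparameters from the table above (the gradients themselves are made up for illustration):

```python
# Minimal sketch of SGD with momentum and (L2-style) weight decay applied to
# one scalar parameter. LR, momentum, and weight decay come from the model
# card's hyperparameter table; the toy gradients are invented.

lr = 0.1
momentum = 0.9
weight_decay = 0.0005

param = 1.0
velocity = 0.0

for grad in [0.5, 0.3, -0.2]:            # made-up gradients for illustration
    grad = grad + weight_decay * param   # weight decay folds into the gradient
    velocity = momentum * velocity + grad
    param = param - lr * velocity

print(round(param, 4))  # 0.8272
```

The real recipe runs this update over 360 epochs at batch size 256; the sketch only shows the per-step arithmetic.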

Citation

```bibtex
@misc{wu2019fbnet,
      title={FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search},
      author={Bichen Wu and Xiaoliang Dai and Peizhao Zhang and Yanghan Wang and Fei Sun and Yiming Wu and Yuandong Tian and Peter Vajda and Yangqing Jia and Kurt Keutzer},
      year={2019},
      eprint={1812.03443},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

Results

Image Classification on ImageNet

| Benchmark | Model | Metric Name | Metric Value | Global Rank |
|-----------|-------|-------------|--------------|-------------|
| ImageNet | fbnetc_100 | Top 1 Accuracy | 75.12% | #227 |
| ImageNet | fbnetc_100 | Top 5 Accuracy | 92.37% | #227 |