BigNAS: Scaling Up Neural Architecture Search with Big Single-Stage Models

Neural architecture search (NAS) has shown promising results discovering models that are both accurate and fast. For NAS, training a one-shot model has become a popular strategy to rank the relative quality of different architectures (child models) using a single set of shared weights. However, while one-shot model weights can effectively rank different network architectures, the absolute accuracies from these shared weights are typically far below those obtained from stand-alone training. To compensate, existing methods assume that the weights must be retrained, finetuned, or otherwise post-processed after the search is completed. These steps significantly increase the compute requirements and complexity of the architecture search and model deployment. In this work, we propose BigNAS, an approach that challenges the conventional wisdom that post-processing of the weights is necessary to get good prediction accuracies. Without extra retraining or post-processing steps, we are able to train a single set of shared weights on ImageNet and use these weights to obtain child models whose sizes range from 200 to 1000 MFLOPs. Our discovered model family, BigNASModels, achieves top-1 accuracies ranging from 76.5% to 80.9%, surpassing state-of-the-art models in this range, including EfficientNets and Once-for-All networks, without extra retraining or post-processing. We present an ablation study and analysis to further understand the proposed BigNASModels.
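The abstract's central claim is that child models of many sizes can be sliced directly out of one set of shared weights, with no retraining or post-processing. The snippet below is a minimal sketch of that weight-sharing idea, not the authors' implementation: only channel widths are varied here, and all names (SliceableConv2d, TinySingleStageNet) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the BigNAS codebase): one set of "big" weights
# is trained, and smaller child models are obtained by slicing channels out of
# the shared tensors, with no retraining.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SliceableConv2d(nn.Conv2d):
    """A conv layer that can run with any number of output channels
    up to its maximum by slicing the shared weight tensor."""

    def forward(self, x, out_channels=None):
        out_channels = out_channels or self.out_channels
        in_channels = x.shape[1]
        weight = self.weight[:out_channels, :in_channels]
        bias = self.bias[:out_channels] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding)


class TinySingleStageNet(nn.Module):
    """Two sliceable conv layers plus a classifier; a child model is picked
    by choosing per-layer channel widths at inference time."""

    def __init__(self, max_widths=(32, 64), num_classes=10):
        super().__init__()
        self.conv1 = SliceableConv2d(3, max_widths[0], 3, padding=1)
        self.conv2 = SliceableConv2d(max_widths[0], max_widths[1], 3, padding=1)
        self.head = nn.Linear(max_widths[1], num_classes)

    def forward(self, x, widths=None):
        w1, w2 = widths or (self.conv1.out_channels, self.conv2.out_channels)
        x = F.relu(self.conv1(x, w1))
        x = F.relu(self.conv2(x, w2))
        x = x.mean(dim=(2, 3))  # global average pooling
        # Slice the classifier's input dimension to match the chosen width.
        return F.linear(x, self.head.weight[:, :w2], self.head.bias)


if __name__ == "__main__":
    net = TinySingleStageNet()
    x = torch.randn(2, 3, 32, 32)
    big = net(x)                     # largest child: all channels
    small = net(x, widths=(16, 32))  # smaller child: same shared weights
    print(big.shape, small.shape)    # both (2, 10)
```

Once the big model's shared weights are trained, any child configuration can be evaluated immediately, which is the property the abstract highlights: no extra retraining or post-processing before deployment.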

Published at ECCV 2020.

Datasets

ImageNet
Task: Neural Architecture Search    Dataset: ImageNet

Model           Accuracy (global rank)   Top-1 Error Rate (global rank)   Params (global rank)   MACs (global rank)
BigNASModel-L   79.5% (#23)              20.5% (#30)                      6.4M (#18)             586M (#126)
BigNASModel-M   78.9% (#30)              21.1% (#39)                      5.5M (#30)             418M (#113)
BigNASModel-S   76.5% (#65)              23.5% (#81)                      4.5M (#50)             242M (#80)
