FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions

Differentiable Neural Architecture Search (DNAS) has demonstrated great success in designing state-of-the-art, efficient neural networks. However, DARTS-based DNAS's search space is small compared to other search methods', since all candidate network layers must be explicitly instantiated in memory. To address this bottleneck, we propose a memory- and computation-efficient DNAS variant: DMaskingNAS. This algorithm expands the search space by up to $10^{14}\times$ over conventional DNAS, supporting searches over spatial and channel dimensions that are otherwise prohibitively expensive: input resolution and number of filters. We propose a masking mechanism for feature map reuse, so that memory and computational costs stay nearly constant as the search space expands. Furthermore, we employ effective shape propagation to maximize per-FLOP or per-parameter accuracy. The searched FBNetV2s yield state-of-the-art performance when compared with all previous architectures. With up to 421$\times$ less search cost, DMaskingNAS finds models with 0.9% higher accuracy and 15% fewer FLOPs than MobileNetV3-Small, and with similar accuracy but 20% fewer FLOPs than EfficientNet-B0. Furthermore, our FBNetV2 outperforms MobileNetV3 by 2.6% in accuracy, with equivalent model size. FBNetV2 models are open-sourced at https://github.com/facebookresearch/mobile-vision.
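To make the channel-masking idea concrete, below is a minimal PyTorch sketch of how a single shared, max-width feature map can be reused for all candidate channel counts, with a differentiable Gumbel-Softmax over mask choices. This is an illustrative approximation based only on the abstract, not the paper's actual implementation; the class name `ChannelMasking`, the argument names, and the candidate widths are all hypothetical.

```python
# Hedged sketch of channel search via masking (assumptions: names and API are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelMasking(nn.Module):
    """Search over the number of output filters by masking one shared
    max-width feature map instead of instantiating one conv per width."""

    def __init__(self, max_channels, candidate_widths, temperature=5.0):
        super().__init__()
        self.temperature = temperature
        # One binary mask per candidate width: keep the first k channels, zero the rest.
        masks = torch.zeros(len(candidate_widths), max_channels)
        for i, k in enumerate(candidate_widths):
            masks[i, :k] = 1.0
        self.register_buffer("masks", masks)
        # Architecture parameters (logits), trained jointly by gradient descent.
        self.alpha = nn.Parameter(torch.zeros(len(candidate_widths)))

    def forward(self, x):
        # Gumbel-Softmax yields differentiable, nearly one-hot mixing weights.
        weights = F.gumbel_softmax(self.alpha, tau=self.temperature)
        # Weighted sum of binary masks -> one "soft" channel mask applied once
        # to the shared feature map, so memory cost stays nearly constant
        # as more candidate widths are added to the search space.
        mask = (weights.unsqueeze(1) * self.masks).sum(dim=0)
        return x * mask.view(1, -1, 1, 1)

# Usage example (shapes are arbitrary): wrap the output of a max-width conv block.
conv = nn.Conv2d(3, 32, kernel_size=3, padding=1)
search_widths = ChannelMasking(max_channels=32, candidate_widths=[8, 16, 24, 32])
y = search_widths(conv(torch.randn(2, 3, 56, 56)))  # shape: (2, 32, 56, 56)
```

The key design point is that adding more candidate widths only adds cheap binary masks and logits, rather than duplicating convolutions and their feature maps, which is why the search space can grow by orders of magnitude at nearly constant memory and compute.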


Datasets

ImageNet
Results on the ImageNet Neural Architecture Search benchmark (global rank in parentheses):

| Task                        | Dataset  | Model      | Top-1 Error Rate | Accuracy     | MACs        |
|-----------------------------|----------|------------|------------------|--------------|-------------|
| Neural Architecture Search  | ImageNet | FBNetV2-L1 | 22.8% (#68)      | 77.2% (#55)  | 325M (#96)  |
| Neural Architecture Search  | ImageNet | FBNetV2-F4 | 24.0% (#95)      | 76.0% (#75)  | 238M (#79)  |
| Neural Architecture Search  | ImageNet | FBNetV2-F3 | 26.8% (#125)     | 73.2% (#102) | 126M (#69)  |
| Neural Architecture Search  | ImageNet | FBNetV2-F1 | 31.7% (#130)     | 68.3% (#106) | 56M (#65)   |
