MnasNet: Platform-Aware Neural Architecture Search for Mobile

Designing convolutional neural networks (CNNs) for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant effort has been dedicated to designing and improving mobile CNNs along all of these dimensions, it is very difficult to manually balance the trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated mobile neural architecture search (MNAS) approach, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike previous work, where latency is considered via another, often inaccurate proxy (e.g., FLOPS), our approach directly measures real-world inference latency by executing the model on mobile phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that encourages layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our MnasNet achieves 75.2% top-1 accuracy with 78ms latency on a Pixel phone, which is 1.8x faster than MobileNetV2 [29] with 0.5% higher accuracy and 2.3x faster than NASNet [36] with 1.2% higher accuracy. Our MnasNet also achieves better mAP quality than MobileNets for COCO object detection. Code is at https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet
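The latency-aware objective described in the abstract can be sketched as a soft-constrained, weighted-product reward. The sketch below is a minimal illustration in Python: the weighted-product form and the -0.07 exponent follow the MnasNet paper, while the function name `mnas_reward`, its argument names, and the 78 ms default target (echoing the Pixel latency quoted above) are our own illustrative choices, not code from the released repository.

```python
def mnas_reward(accuracy: float, latency_ms: float, target_ms: float = 78.0,
                alpha: float = -0.07, beta: float = -0.07) -> float:
    """Soft-constrained reward: accuracy scaled by (latency / target) ** w.

    With alpha == beta, the reward smoothly trades accuracy against measured
    on-device latency instead of hard-rejecting models that miss the target.
    """
    w = alpha if latency_ms <= target_ms else beta
    return accuracy * (latency_ms / target_ms) ** w


# A model hitting the target keeps its accuracy as reward; a slower model
# at 100 ms is penalized only mildly, keeping the search space explorable.
print(mnas_reward(0.752, 78.0))   # 0.752
print(mnas_reward(0.752, 100.0))  # ~0.739
```

Because the latency term is a measured quantity rather than a FLOPS proxy, the same reward can be reused unchanged for any target device by swapping in that device's measured latencies.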


Datasets

ImageNet, COCO
Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Real-Time Object Detection | COCO | MobileNetV2 + SSDLite | mAP | 22.1 | #8
Image Classification | ImageNet | MnasNet-A1 | Top-1 Accuracy | 75.2% | #531
Image Classification | ImageNet | MnasNet-A1 | Top-5 Accuracy | 92.5% | #193
Image Classification | ImageNet | MnasNet-A1 | Number of params | 3.9M | #354
Image Classification | ImageNet | MnasNet-A2 | Top-1 Accuracy | 75.6% | #522
Image Classification | ImageNet | MnasNet-A2 | Top-5 Accuracy | 92.7% | #185
Image Classification | ImageNet | MnasNet-A2 | Number of params | 4.8M | #341
Image Classification | ImageNet | MnasNet-A3 | Top-1 Accuracy | 76.7% | #492
Image Classification | ImageNet | MnasNet-A3 | Top-5 Accuracy | 93.3% | #170
Image Classification | ImageNet | MnasNet-A3 | Number of params | 5.2M | #333
Image Classification | ImageNet | MnasNet-A3 | Hardware Burden | None | #1
Image Classification | ImageNet | MnasNet-A3 | Operations per network pass | 0.0403G | #1

Methods