Learning Transferable Architectures for Scalable Image Recognition

CVPR 2018 · Barret Zoph • Vijay Vasudevan • Jonathon Shlens • Quoc V. Le

In this paper, we study a method to learn model architectures directly on the dataset of interest. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together multiple copies of it, each with its own parameters, to form a convolutional architecture we name the "NASNet architecture". On ImageNet, NASNet achieves 82.7% top-1 and 96.2% top-5 accuracy. A small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms.
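To make the "stack copies of one searched cell, each with its own parameters" idea concrete, here is a minimal PyTorch sketch. The `Cell` below is a hypothetical placeholder (a simple separable-convolution block), not the cell actually found by the search in the paper, and the widths and reduction schedule are illustrative assumptions.

```python
# Sketch only: stacking N copies of a reusable cell, each with independent weights.
import torch
import torch.nn as nn

class Cell(nn.Module):
    """Placeholder cell; in NASNet this internal structure is found by architecture search."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.op = nn.Sequential(
            nn.ReLU(),
            # depthwise + pointwise convolution, a common primitive in searched cells
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.op(x)

class StackedNet(nn.Module):
    """Stack `num_cells` copies of the cell; stride-2 'reduction' cells halve resolution."""
    def __init__(self, num_cells=6, width=44, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, width, 3, padding=1, bias=False)
        cells, ch = [], width
        for i in range(num_cells):
            # illustrative schedule: reduce resolution at 1/3 and 2/3 of the depth
            stride = 2 if i in {num_cells // 3, 2 * num_cells // 3} else 1
            out_ch = ch * 2 if stride == 2 else ch
            cells.append(Cell(ch, out_ch, stride))  # each copy gets its own parameters
            ch = out_ch
        self.cells = nn.Sequential(*cells)
        self.head = nn.Linear(ch, num_classes)

    def forward(self, x):
        x = self.cells(self.stem(x))
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)

# Example: a CIFAR-sized input, since the cell is searched on CIFAR-10 in the paper.
model = StackedNet(num_cells=6)
logits = model(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```

Transferring to ImageNet then amounts to reusing the same cell with a different stem, more cells, and wider filters, rather than searching a new architecture from scratch.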


Evaluation


Task                 | Dataset  | Model        | Metric         | Value | Global rank
Image Classification | ImageNet | NASNet-A (6) | Top-1 Accuracy | 82.7% | #3
Image Classification | ImageNet | NASNet-A (6) | Top-5 Accuracy | 96.2% | #3