Progressive Neural Architecture Search

We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state-of-the-art classification accuracies on CIFAR-10 and ImageNet.
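The core idea above — grow candidate architectures one block at a time, and use a cheap learned surrogate to decide which expansions are worth the cost of actually training — can be sketched as follows. This is a toy illustration, not the paper's implementation: the operation set, beam size, `evaluate` stand-in, and the frequency-based `Surrogate` are all hypothetical simplifications of PNAS's cell search space and its LSTM/MLP accuracy predictor.

```python
import random

# Illustrative op vocabulary and search hyperparameters (not the paper's).
OPS = ["3x3_conv", "5x5_conv", "max_pool", "identity"]
BEAM = 4          # top-K candidates kept (and trained) per complexity level
MAX_BLOCKS = 3    # maximum cell complexity

def evaluate(cell):
    # Stand-in for training a child network: a deterministic fake accuracy.
    random.seed(hash(tuple(cell)) % (2**32))
    return random.random()

class Surrogate:
    """Crude surrogate predictor: scores a cell by the average observed
    accuracy of its ops. A stand-in for PNAS's learned predictor."""
    def __init__(self):
        self.op_scores = {op: 0.5 for op in OPS}
        self.counts = {op: 1 for op in OPS}

    def fit(self, cells, accs):
        for cell, acc in zip(cells, accs):
            for op in cell:
                self.op_scores[op] += acc
                self.counts[op] += 1

    def predict(self, cell):
        return sum(self.op_scores[op] / self.counts[op] for op in cell) / len(cell)

def progressive_search():
    surrogate = Surrogate()
    beam = [[op] for op in OPS]            # complexity level 1: all 1-block cells
    accs = [evaluate(c) for c in beam]     # train them all (cheap at this size)
    surrogate.fit(beam, accs)
    for _ in range(MAX_BLOCKS - 1):
        # Expand each survivor by one block; rank expansions with the
        # surrogate so only the top-K ever get trained.
        candidates = [c + [op] for c in beam for op in OPS]
        beam = sorted(candidates, key=surrogate.predict, reverse=True)[:BEAM]
        accs = [evaluate(c) for c in beam]
        surrogate.fit(beam, accs)
    return max(zip(beam, accs), key=lambda t: t[1])

cell, acc = progressive_search()
print(cell, round(acc, 3))
```

The efficiency gain comes from the ranking step: at each level only `BEAM` of the `BEAM * len(OPS)` possible expansions are trained, with the surrogate (refit on every new observation) filtering the rest.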

Published at ECCV 2018.

Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Neural Architecture Search | ImageNet | PNAS | Params | 5.1 | #65 |
| Image Classification | ImageNet | PNASNet-5 | Top 1 Accuracy | 82.9% | #411 |
| Image Classification | ImageNet | PNASNet-5 | Top 5 Accuracy | 96.2% | #87 |
| Image Classification | ImageNet | PNASNet-5 | Number of params | 86.1M | #777 |
| Image Classification | ImageNet | PNASNet-5 | Operations per network pass | 2.5G | #1 |
| Image Classification | ImageNet | PNASNet-5 | GFLOPs | 50 | #412 |

Results from Other Papers

| Task | Dataset | Model | Metric Name | Metric Value | Rank |
|---|---|---|---|---|---|
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | PNAS | Accuracy (Val) | 44.75 | #14 |