Searching for A Robust Neural Architecture in Four GPU Hours

CVPR 2019 · Xuanyi Dong, Yi Yang

Conventional neural architecture search (NAS) approaches are based on reinforcement learning or evolutionary strategies, which take more than 3000 GPU hours to find a good model on CIFAR-10. We propose an efficient NAS approach that learns to search by gradient descent. Our approach represents the search space as a directed acyclic graph (DAG). This DAG contains billions of sub-graphs, each of which represents a neural architecture. To avoid traversing all of these sub-graphs, we develop a differentiable sampler over the DAG. The sampler is learnable and is optimized by the validation loss after training the sampled architecture; in this way, our approach, named Gradient-based search using Differentiable Architecture Sampler (GDAS), can be trained end-to-end by gradient descent. In experiments, one search procedure finishes in four GPU hours on CIFAR-10, and the discovered model obtains a test error of 2.82% with only 2.5M parameters, which is on par with the state of the art. Code is publicly available on GitHub: https://github.com/D-X-Y/NAS-Projects.
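To make the "differentiable sampler" concrete, below is a minimal PyTorch sketch of GDAS-style sampling on a single DAG edge, using the straight-through Gumbel-softmax trick. It is illustrative only, not the authors' implementation; the class name `GumbelEdge` and parameters `ops` and `tau` are hypothetical names chosen for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelEdge(nn.Module):
    """One DAG edge that samples a single candidate operation per forward pass.

    Illustrative sketch of GDAS-style differentiable sampling (not the
    released code). `ops` is any list of candidate modules (e.g. conv,
    pooling, identity) that share input/output shapes.
    """

    def __init__(self, ops, tau=10.0):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        # One learnable logit per candidate op: the architecture parameters.
        self.alpha = nn.Parameter(torch.zeros(len(ops)))
        self.tau = tau  # softmax temperature, typically annealed during search

    def forward(self, x):
        # Straight-through Gumbel-softmax: the forward pass uses a one-hot
        # sample, so only the chosen op is evaluated, while gradients flow
        # through the soft probabilities back into self.alpha.
        weights = F.gumbel_softmax(self.alpha, tau=self.tau, hard=True)
        index = int(weights.argmax())
        # weights[index] equals 1.0 in the forward pass but carries the
        # gradient that makes the discrete sampling step differentiable.
        return weights[index] * self.ops[index](x)

# Usage sketch: two candidate ops on one edge.
edge = GumbelEdge([nn.Conv2d(16, 16, 3, padding=1), nn.Identity()])
out = edge(torch.randn(2, 16, 8, 8))
```

In the bi-level setup the abstract describes, the sampler logits (`alpha` above) would be updated by gradient descent on the validation loss while the network weights are updated on the training loss, with the temperature annealed toward a hard, discrete choice as the search proceeds.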

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Neural Architecture Search | CIFAR-10 | GDAS (FRC) | Top-1 Error Rate | 2.5% | #18 |
| Neural Architecture Search | CIFAR-10 | GDAS (FRC) | Search Time (GPU days) | 0.17 | #6 |
| Neural Architecture Search | CIFAR-10 | GDAS | Top-1 Error Rate | 3.4% | #40 |
| Neural Architecture Search | CIFAR-10 | GDAS | Search Time (GPU days) | 0.21 | #7 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | GDAS | Accuracy (Test) | 93.61 | #21 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | GDAS | Accuracy (Val) | 89.89 | #22 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | GDAS | Search Time (s) | 28926 | #13 |
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | GDAS | Accuracy (Test) | 41.71 | #34 |
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | GDAS | Search Time (s) | 28926 | #15 |

Results from Other Papers

| Task | Dataset | Model | Metric Name | Metric Value | Rank |
|---|---|---|---|---|---|
| Neural Architecture Search | NAS-Bench-201, CIFAR-100 | GDAS | Accuracy (Test) | 70.70 | #25 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-100 | GDAS | Accuracy (Val) | 71.34 | #20 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-100 | GDAS | Search Time (s) | 28926 | #11 |
