Geometry-Aware Gradient Algorithms for Neural Architecture Search

Recent state-of-the-art methods for neural architecture search (NAS) exploit gradient-based optimization by relaxing the problem into continuous optimization over architectures and shared weights, a noisy process that remains poorly understood. We argue for the study of single-level empirical risk minimization to understand NAS with weight-sharing, reducing the design of NAS methods to devising optimizers and regularizers that can quickly obtain high-quality solutions to this problem. Invoking the theory of mirror descent, we present a geometry-aware framework that exploits the underlying structure of this optimization to return sparse architectural parameters, leading to simple yet novel algorithms that enjoy fast convergence guarantees and achieve state-of-the-art accuracy on the latest NAS benchmarks in computer vision. Notably, we exceed the best published results for both CIFAR and ImageNet on both the DARTS search space and NAS-Bench-201; on the latter we achieve near-oracle-optimal performance on CIFAR-10 and CIFAR-100. Together, our theory and experiments demonstrate a principled way to co-design optimizers and continuous relaxations of discrete NAS search spaces.
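To make the mirror-descent framing concrete, the sketch below shows the kind of update it implies for architecture parameters: each edge's operation weights live on a probability simplex and are updated multiplicatively (exponentiated gradient, i.e., mirror descent with the entropic mirror map), which keeps them normalized and tends to drive them toward sparse, near one-hot choices. This is a minimal illustrative sketch, not the paper's implementation: the function name `exponentiated_gradient_step`, the array shapes, the learning rate, and the random stand-in gradient are assumptions; in the actual method the gradient would come from backpropagating the single-level training loss through the shared weights.

```python
import numpy as np

def exponentiated_gradient_step(alpha, grad, lr=0.1):
    """One exponentiated-gradient (entropic mirror descent) step.

    alpha: (num_edges, num_ops) array; each row is on the simplex.
    grad:  gradient of the training loss w.r.t. alpha (same shape).
    Returns the updated alpha, with each row renormalized to sum to 1.
    """
    updated = alpha * np.exp(-lr * grad)                   # multiplicative update
    return updated / updated.sum(axis=1, keepdims=True)    # project back to the simplex

# Toy usage: 3 edges, 4 candidate operations, random stand-in gradients.
rng = np.random.default_rng(0)
alpha = np.full((3, 4), 0.25)                              # uniform initialization
for _ in range(100):
    grad = rng.normal(size=alpha.shape)                    # placeholder for dL/dalpha
    alpha = exponentiated_gradient_step(alpha, grad)
print(alpha.round(3))                                      # rows stay on the simplex
```

Compared with an additive (softmax-reparameterized) update, the multiplicative form never leaves the simplex and concentrates mass on a few operations more quickly, which is the sparsity behavior the abstract refers to.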

ICLR 2021 PDF | ICLR 2021 Abstract
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Neural Architecture Search | ImageNet | GAEA PC-DARTS | Top-1 Error Rate (%) | 24 | #95 |
| Neural Architecture Search | ImageNet | GAEA PC-DARTS | Params (M) | 5.6 | #62 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | GAEA DARTS (ERM) | Accuracy (Test, %) | 94.1 | #14 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-100 | GAEA DARTS (ERM) | Accuracy (Test, %) | 73.43 | #7 |
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | GAEA DARTS (ERM) | Accuracy (Test, %) | 46.36 | #10 |
