DARTS: Differentiable Architecture Search

ICLR 2019 · Hanxiao Liu, Karen Simonyan, Yiming Yang

This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.
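The core of the continuous relaxation is to replace the discrete choice of one operation per edge with a softmax-weighted mixture of all candidate operations, so the architecture choice itself becomes differentiable. Below is a minimal PyTorch sketch of such a mixed operation; the candidate op set, module names, and initialization are illustrative assumptions, not the paper's exact search space or the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_candidate_ops(channels):
    # Illustrative candidate set; the paper's search space differs.
    return nn.ModuleList([
        nn.Identity(),                                             # skip connection
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),   # 3x3 conv
        nn.Conv2d(channels, channels, 5, padding=2, bias=False),   # 5x5 conv
        nn.AvgPool2d(3, stride=1, padding=1),                      # 3x3 avg pool
    ])

class MixedOp(nn.Module):
    """Continuous relaxation of one edge: a softmax-weighted sum over all
    candidate operations, making the operation choice differentiable."""
    def __init__(self, channels):
        super().__init__()
        self.ops = make_candidate_ops(channels)
        # Architecture parameters (alpha): one logit per candidate op.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Example: one forward pass through a single relaxed edge.
edge = MixedOp(channels=16)
x = torch.randn(2, 16, 32, 32)
y = edge(x)  # gradients flow to both the op weights and alpha
```

In the full algorithm, the architecture parameters (alpha) and the network weights are optimized alternately, on validation and training data respectively, and after search each edge is discretized to its highest-weighted operation.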

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Neural Architecture Search | CIFAR-10 | DARTS (second order) | Top-1 Error Rate | 2.76% | #31 |
| | | | Search Time (GPU days) | 4 | #26 |
| | | | Parameters | 3.3M | #2 |
| Neural Architecture Search | CIFAR-10 | DARTS (first order) | Top-1 Error Rate | 3.00% | #34 |
| | | | Search Time (GPU days) | 1.5 | #23 |
| | | | Parameters | 3.3M | #2 |
| Neural Architecture Search | CIFAR-10 Image Classification | DARTS + c/o | Percentage Error | 2.83% | #16 |
| | | | Params | 3.4M | #6 |
| | | | Search Time (GPU days) | 4 | #1 |
| Neural Architecture Search | ImageNet | DARTS | Top-1 Error Rate | 26.7% | #123 |
| | | | Accuracy | 73.3% | #100 |
| | | | Params | 4.9M | #67 |
| | | | MACs | 595M | #128 |
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | DARTS (second order) | Accuracy (Test) | 16.43% | #40 |
| | | | Search Time (s) | 29902 | #16 |
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | DARTS (first order) | Accuracy (Test) | 16.43% | #40 |
| | | | Search Time (s) | 10890 | #11 |
| Language Modelling | Penn Treebank (Word Level) | Differentiable NAS | Validation Perplexity | 58.3 | #22 |
| | | | Test Perplexity | 56.1 | #26 |
| | | | Params | 23M | #19 |