DARTS-: Robustly Stepping out of Performance Collapse Without Indicators

Despite the fast development of differentiable architecture search (DARTS), it suffers from long-standing performance instability, which severely limits its application. Existing robustifying methods draw clues from the resulting deteriorated behavior rather than identifying the causing factor. Various indicators, such as Hessian eigenvalues, have been proposed as signals to stop searching before the performance collapses. However, these indicator-based methods tend to reject good architectures when the thresholds are set inappropriately, especially since the search process is intrinsically noisy. In this paper, we take a more subtle and direct approach to resolving the collapse. We first demonstrate that skip connections hold a clear advantage over other candidate operations: they can easily recover from a disadvantageous state and become dominant. We conjecture that this privilege causes the degenerated performance. We therefore propose to factor out this benefit with an auxiliary skip connection, ensuring a fairer competition among all operations. We call this approach DARTS-. Extensive experiments on various datasets verify that it can substantially improve robustness. Our code is available at https://github.com/Meituan-AutoML/DARTS-.
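The key change relative to DARTS is small: an auxiliary skip connection is added to each mixed operation's output, scaled by a coefficient that sits outside the architecture softmax and is decayed toward zero over the search, so the candidate skip operation no longer enjoys an unearned advantage in the weight competition. Below is a minimal PyTorch sketch of this idea; the class name `MixedOpDartsMinus`, the candidate-op list, and the linear decay schedule are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOpDartsMinus(nn.Module):
    """Mixed operation with an auxiliary skip connection (DARTS- style sketch).

    Assumes all candidate ops preserve the input shape, so the auxiliary
    identity branch can be added directly to the mixed output.
    """

    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        # Architecture parameters over the searchable candidates only;
        # the auxiliary skip is deliberately excluded from this softmax.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(candidate_ops)))

    def forward(self, x, beta):
        weights = F.softmax(self.alpha, dim=0)
        mixed = sum(w * op(x) for w, op in zip(weights, self.ops))
        # Auxiliary skip connection, scaled by beta and kept outside the
        # softmax, factoring out the skip operation's recovery advantage.
        return mixed + beta * x


def beta_schedule(epoch, total_epochs):
    # Illustrative linear decay from 1 to 0, so the supernet smoothly
    # converges to the plain DARTS parameterization by the end of search.
    return 1.0 - epoch / total_epochs


# Usage sketch with hypothetical shape-preserving candidates.
ops = [nn.Identity(), nn.Conv2d(16, 16, 3, padding=1)]
cell = MixedOpDartsMinus(ops)
x = torch.randn(2, 16, 8, 8)
y = cell(x, beta=beta_schedule(epoch=10, total_epochs=50))
```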


Datasets


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | DARTS- | Accuracy (Test) | 93.80 | #20 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | DARTS- | Accuracy (Val) | 91.03 | #18 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | DARTS- | Search time (s) | 11520 | #10 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-100 | DARTS- | Accuracy (Test) | 71.53 | #21 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-100 | DARTS- | Accuracy (Val) | 71.36 | #19 |
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | DARTS- | Accuracy (Test) | 45.12 | #27 |
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | DARTS- | Accuracy (Val) | 44.87 | #14 |

Methods


No methods listed for this paper.