BNAS-v2: Memory-efficient and Performance-collapse-prevented Broad Neural Architecture Search

18 Sep 2020  ·  Zixiang Ding, Yaran Chen, Nannan Li, Dongbin Zhao ·

In this paper, we propose BNAS-v2 to further improve the efficiency of NAS, embodying both superiorities of BCNN simultaneously. To mitigate the unfair training issue of BNAS, we employ a continuous relaxation strategy that makes each edge of a cell in BCNN relevant to all candidate operations, yielding an over-parameterized BCNN. Specifically, the continuous relaxation strategy relaxes the choice of a candidate operation into a softmax over all predefined operations. Consequently, BNAS-v2 can use gradient-based optimization to simultaneously update every possible path of the over-parameterized BCNN, rather than a single sampled path as in BNAS. However, continuous relaxation introduces another issue, known as performance collapse, in which weight-free operations tend to be selected by the search strategy. For this issue, we give two solutions: 1) we propose the Confident Learning Rate (CLR), which scales architecture-weight updates by the confidence of the gradient, a quantity that increases with the training time of the over-parameterized BCNN; and 2) we introduce the combination of partial channel connections and edge normalization, which also further improves memory efficiency. We denote differentiable BNAS (i.e., BNAS with continuous relaxation) as BNAS-D, BNAS-D with CLR as BNAS-v2-CLR, and partially connected BNAS-D as BNAS-v2-PC. Experimental results on CIFAR-10 and ImageNet show that 1) BNAS-v2 delivers state-of-the-art search efficiency on both CIFAR-10 (0.05 GPU days, 4x faster than BNAS) and ImageNet (0.19 GPU days); and 2) the proposed CLR effectively alleviates the performance collapse issue in both BNAS-D and the vanilla differentiable NAS framework.
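The two core ideas of the abstract can be sketched in a few lines: a continuously relaxed edge mixes all candidate operations via a softmax over architecture weights, and a CLR-style schedule damps early architecture updates. This is a minimal NumPy sketch under stated assumptions: the candidate operations and placeholder gradient are hypothetical, and the linear ramp used for CLR illustrates the idea of confidence growing with training time rather than the paper's actual formula.

```python
import numpy as np

def softmax(a):
    # Numerically stable softmax over architecture weights.
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical candidate operations on one edge; the "zero" op stands in
# for the weight-free operations that drive performance collapse.
OPS = [
    lambda x: x,                 # identity / skip connection
    lambda x: np.tanh(x),        # stand-in for a learned operation
    lambda x: np.zeros_like(x),  # weight-free "zero" operation
]

def mixed_edge(x, alpha):
    """Continuous relaxation: the edge output is the softmax-weighted sum
    over ALL candidate operations, so one gradient step updates every
    possible path of the over-parameterized BCNN at once."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, OPS))

def confident_lr(base_lr, epoch, total_epochs):
    """Illustrative CLR schedule (assumed, not the paper's formula):
    the architecture learning rate ramps up with training time,
    reflecting growing confidence in the architecture gradient."""
    return base_lr * min(1.0, epoch / total_epochs)

# One illustrative architecture-weight update step.
alpha = np.array([0.1, 0.5, -0.2])   # architecture weights for this edge
x = np.ones(4)
out = mixed_edge(x, alpha)
grad_alpha = 0.01 * np.ones_like(alpha)  # placeholder gradient
alpha -= confident_lr(0.025, epoch=10, total_epochs=50) * grad_alpha
```

Because updates early in training are scaled toward zero, noisy initial gradients cannot immediately favor the weight-free operations, which is the intuition behind using CLR against performance collapse.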
