AtomNAS: Fine-Grained End-to-End Neural Architecture Search

Search space design is critical to neural architecture search (NAS) algorithms. We propose a fine-grained search space composed of atomic blocks, a minimal search unit much smaller than the blocks used in recent NAS algorithms. This search space allows a mix of operations by composing different types of atomic blocks, whereas the search spaces of previous methods only allow homogeneous operations. Based on this search space, we propose a resource-aware architecture search framework that automatically assigns computational resources (e.g., output channel numbers) to each operation by jointly considering performance and computational cost. In addition, to accelerate the search process, we propose a dynamic network shrinkage technique that prunes atomic blocks with negligible influence on the output on the fly. Instead of the search-and-retrain two-stage paradigm, our method simultaneously searches and trains the target architecture. Our method achieves state-of-the-art performance under several FLOPs configurations on ImageNet with a small search cost. We release our entire codebase at: https://github.com/meijieru/AtomNAS.

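For intuition, the following is a minimal sketch (not the authors' released code) of how an atomic-block search space and the shrinkage mechanism described in the abstract could look in PyTorch. It assumes a MobileNetV2-style block decomposed into per-kernel-size atomic blocks, uses BatchNorm scale factors as importance indicators, and replaces the paper's resource-aware (cost-weighted) penalty with a constant weight for brevity; all module and function names are illustrative.

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn as nn


class AtomicBlock(nn.Module):
    """One atomic block: 1x1 conv -> k x k depthwise conv -> 1x1 conv.

    A MobileNetV2-style inverted residual with expansion width C can be
    viewed as the sum of C such atomic blocks, so searching over how many
    atomic blocks of each kernel size survive yields a fine-grained,
    mixed operator.
    """

    def __init__(self, in_ch, hidden_ch, out_ch, kernel_size):
        super().__init__()
        self.expand = nn.Sequential(
            nn.Conv2d(in_ch, hidden_ch, 1, bias=False),
            nn.BatchNorm2d(hidden_ch),
            nn.ReLU6(inplace=True),
        )
        self.depthwise = nn.Sequential(
            nn.Conv2d(hidden_ch, hidden_ch, kernel_size,
                      padding=kernel_size // 2, groups=hidden_ch, bias=False),
            nn.BatchNorm2d(hidden_ch),  # its scale factors gate the block
            nn.ReLU6(inplace=True),
        )
        self.project = nn.Sequential(
            nn.Conv2d(hidden_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.project(self.depthwise(self.expand(x)))


class MixedBlock(nn.Module):
    """Sum of atomic blocks with different kernel sizes (e.g. 3/5/7)."""

    def __init__(self, in_ch, out_ch, hidden_per_kernel=32, kernels=(3, 5, 7)):
        super().__init__()
        self.atoms = nn.ModuleList(
            AtomicBlock(in_ch, hidden_per_kernel, out_ch, k) for k in kernels
        )

    def forward(self, x):
        return sum(atom(x) for atom in self.atoms)


def scale_penalty(model, weight=1e-4):
    """L1 penalty on the depthwise BN scales; pushing a scale to zero
    effectively removes the corresponding atomic-block channel.  The paper
    weights this penalty by each block's computational cost (resource-aware);
    a single constant weight is used here for simplicity."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, AtomicBlock):
            penalty = penalty + m.depthwise[1].weight.abs().sum()
    return weight * penalty


@torch.no_grad()
def shrink(model, threshold=1e-3):
    """Dynamic shrinkage (simplified): zero out channels whose BN scale has
    collapsed below a threshold; a full implementation would physically
    remove them to save computation during the rest of training."""
    for m in model.modules():
        if isinstance(m, AtomicBlock):
            bn = m.depthwise[1]
            mask = (bn.weight.abs() >= threshold).float()
            bn.weight.mul_(mask)
            bn.bias.mul_(mask)
```

In a training loop, `scale_penalty` would be added to the task loss and `shrink` applied periodically, so atomic blocks whose scales collapse are pruned on the fly; this is the sense in which the architecture is searched and trained simultaneously rather than in two stages.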
Published at ICLR 2020.

Datasets

ImageNet
Task: Neural Architecture Search    Dataset: ImageNet    (per-metric global rank in parentheses)

Model          Top-1 Error Rate   Accuracy      Params        MACs
AtomNAS-A+†    23.7 (#86)         76.3 (#70)    4.7M (#47)    260M (#83)
AtomNAS-B+†    22.8 (#68)         77.2 (#55)    5.5M (#30)    329M (#98)
AtomNAS-C+†    22.4 (#61)         77.6 (#49)    5.9M (#25)    363M (#106)
