Scalable NAS with Factorizable Architectural Parameters

31 Dec 2019  ·  Lanfei Wang, Lingxi Xie, Tianyi Zhang, Jun Guo, Qi Tian

Neural Architecture Search (NAS) is an emerging topic in machine learning and computer vision. The fundamental idea of NAS is to use an automatic mechanism, in place of manual design, to explore powerful network architectures. A key factor in NAS is scaling up the search space, e.g., increasing the number of candidate operators, so that more possibilities are covered; however, existing search algorithms often get lost when the operator pool becomes large. To avoid heavy computation and competition among similar operators in the same pool, this paper presents a scalable algorithm that factorizes a large set of candidate operators into smaller subspaces. As a practical example, this allows us to search for effective activation functions alongside the regular operators, including convolution, pooling, skip-connect, etc. With a small increase in search cost and no extra cost in re-training, we find interesting architectures that were not explored before, and achieve state-of-the-art performance on CIFAR10 and ImageNet, two standard image classification benchmarks.
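
To illustrate the idea of a factorized search space, the sketch below shows a minimal DARTS-style mixed operation in PyTorch in which the architectural parameters are split into two independent softmaxes: one over regular operators and one over candidate activation functions. The class name `FactorizedMixedOp`, the specific operator and activation lists, and the initialization are illustrative assumptions, not the authors' implementation; the point is only that the joint (operator, activation) space grows multiplicatively while the number of architectural parameters grows additively.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorizedMixedOp(nn.Module):
    """Mixed operation with a factorized candidate space (illustrative sketch).

    Instead of one softmax over every (operator, activation) pair, the
    architectural parameters are split into two smaller subspaces.
    """

    def __init__(self, channels):
        super().__init__()
        # Subspace 1: regular operators (convolution, pooling, skip-connect).
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.AvgPool2d(3, stride=1, padding=1),
            nn.Identity(),  # skip-connect
        ])
        # Subspace 2: candidate activation functions (illustrative choices).
        self.acts = [F.relu, torch.tanh, F.elu]
        # Separate architectural parameters for each subspace.
        self.alpha_op = nn.Parameter(1e-3 * torch.randn(len(self.ops)))
        self.alpha_act = nn.Parameter(1e-3 * torch.randn(len(self.acts)))

    def forward(self, x):
        w_op = F.softmax(self.alpha_op, dim=0)
        w_act = F.softmax(self.alpha_act, dim=0)
        # Weight the operators and activations independently; the effective
        # search space is the product of the two subspaces.
        y = sum(w * op(x) for w, op in zip(w_op, self.ops))
        return sum(w * act(y) for w, act in zip(w_act, self.acts))


if __name__ == "__main__":
    layer = FactorizedMixedOp(channels=16)
    out = layer(torch.randn(2, 16, 32, 32))
    print(out.shape)  # torch.Size([2, 16, 32, 32])
```

With this factorization, adding a new activation candidate costs one extra architectural parameter rather than one per operator, which is one way to read the paper's claim that the search space can be scaled up without overwhelming the search algorithm.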
