Tr-NAS: Memory-Efficient Neural Architecture Search with Transferred Blocks

29 Sep 2021 · Linh-Tam Tran, A F M Shahab Uddin, Sung-Ho Bae

Neural Architecture Search (NAS) is one of the most rapidly growing research fields in machine learning due to its ability to discover high-performance architectures automatically. Although conventional NAS algorithms focus on improving search efficiency (e.g., high performance with less search time), they often incur a large memory footprint and high power consumption. To remedy this problem, we propose a new paradigm for NAS that effectively reduces memory usage while maintaining high performance. The proposed algorithm is motivated by our observation that manually designed and NAS-based architectures share similar low-level representations, regardless of differences in network topology. Reflecting this, we propose a new architectural paradigm for NAS, called $\textbf{Transfer-NAS}$, that replaces the first several cells in the generated architecture with conventional (hand-crafted) pre-trained blocks. Since the replaced pre-trained blocks are kept frozen during training, the memory footprint can be significantly reduced. We demonstrate the effectiveness of the proposed method by incorporating it into Regularized Evolution and Differentiable ARchiTecture Search with Perturbation-based architecture selection (DARTS+PT) on the NAS-Bench-201 and DARTS search spaces. Extensive experiments show that Transfer-NAS decreases memory usage by up to $\textbf{50\%}$ while achieving higher or comparable performance relative to the baselines. Furthermore, the proposed method is $\textbf{1.98$\times$}$ faster in search time when incorporated into DARTS+PT on NAS-Bench-201 compared to the conventional method.
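To illustrate the core idea, the following is a minimal PyTorch sketch of replacing the early cells of a searched network with frozen, pre-trained hand-crafted blocks. The choice of a ResNet-18 stem, the split point after its second stage, and the toy `SearchedCell` are illustrative assumptions, not the authors' actual configuration; the point is only that frozen blocks need no gradient or activation storage, which is where the memory saving comes from.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class SearchedCell(nn.Module):
    """Stand-in for a cell produced by a NAS algorithm (e.g., DARTS+PT).
    The real searched cell topology would be substituted here."""
    def __init__(self, channels):
        super().__init__()
        self.op = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.op(x) + x  # simple residual cell stand-in


class TransferNASNet(nn.Module):
    """Frozen pre-trained low-level blocks + trainable searched cells on top."""
    def __init__(self, num_searched_cells=3, num_classes=10):
        super().__init__()
        resnet = models.resnet18(weights="IMAGENET1K_V1")
        # Pre-trained hand-crafted blocks replace the first searched cells
        # (split point chosen here for illustration; output has 128 channels).
        self.frozen_stem = nn.Sequential(
            resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
            resnet.layer1, resnet.layer2,
        )
        for p in self.frozen_stem.parameters():
            p.requires_grad = False

        # Only the searched cells and the classifier head are trained.
        self.searched_cells = nn.Sequential(
            *[SearchedCell(128) for _ in range(num_searched_cells)]
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes)
        )

    def forward(self, x):
        # No gradients (and hence no stored activations) for the frozen stem.
        with torch.no_grad():
            x = self.frozen_stem(x)
        x = self.searched_cells(x)
        return self.head(x)


model = TransferNASNet()
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.025, momentum=0.9)
```

In a search loop, only the searched cells' weights (and, for differentiable methods, the architecture parameters) would be updated, so backpropagation stops at the frozen stem and its intermediate activations never need to be kept in memory.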
