Neural Architecture Search
774 papers with code • 26 benchmarks • 27 datasets
Neural architecture search (NAS) is a technique for automating the design of artificial neural networks (ANNs), a widely used class of models in machine learning. NAS replaces the manual process of an expert iteratively tweaking a network and learning what works well with an automated search, making it possible to discover architectures more complex than those designed by hand.
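At its simplest, a NAS system samples candidate architectures from a search space, evaluates each one, and keeps the best. The sketch below illustrates this loop with random search; the search space, the `evaluate` proxy score, and all names here are illustrative assumptions, not any specific paper's method (a real system would train each candidate, or a cheap proxy for it, and return validation accuracy).

```python
import random

# Hypothetical search space: a few discrete architectural choices.
SEARCH_SPACE = {
    "depth": [2, 4, 6, 8],    # number of layers
    "width": [16, 32, 64],    # channels per layer
    "kernel": [3, 5, 7],      # convolution kernel size
}

def sample_architecture(rng):
    """Sample one candidate architecture from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training + validation. A real NAS system would
    train the candidate and return validation accuracy; this toy
    proxy just rewards capacity and penalizes kernel cost."""
    return arch["depth"] * arch["width"] - 0.5 * arch["depth"] * arch["kernel"] ** 2

def random_search(n_trials=50, seed=0):
    """Sample n_trials architectures and return the best one found."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

More sophisticated NAS methods replace the random sampler with a learned controller (reinforcement learning, evolution, or gradient-based relaxation) and replace full training with cheaper proxies such as weight sharing.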
Image Credit: NAS with Reinforcement Learning
Libraries
Use these libraries to find Neural Architecture Search models and implementations.
Most implemented papers
Auto-Keras: An Efficient Neural Architecture Search System
In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search.
SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization
We propose SpineNet, a backbone with scale-permuted intermediate features and cross-scale connections that is learned on an object detection task by Neural Architecture Search.
Learning Efficient Convolutional Networks through Network Slimming
For VGGNet, a multi-pass version of network slimming gives a 20x reduction in model size and a 5x reduction in computing operations.
AMC: AutoML for Model Compression and Acceleration on Mobile Devices
Model compression is a critical technique to efficiently deploy neural network models on mobile devices which have limited computation resources and tight power budgets.
Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation
Therefore, we propose to search the network level structure in addition to the cell level structure, which forms a hierarchical architecture search space.
Neural Architecture Search with Reinforcement Learning
Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model.
BAM: Bottleneck Attention Module
In this work, we focus on the effect of attention in general deep neural networks.
AutoSlim: Towards One-Shot Architecture Search for Channel Numbers
Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than the default MobileNet-v2 (301M FLOPs), and even 0.2% better than the RL-searched MNasNet (317M FLOPs).
Once-for-All: Train One Network and Specialize it for Efficient Deployment
On diverse edge devices, OFA consistently outperforms state-of-the-art (SOTA) NAS methods (up to 4.0% ImageNet top-1 accuracy improvement over MobileNetV3, or the same accuracy while 1.5x faster than MobileNetV3 and 2.6x faster than EfficientNet w.r.t. measured latency) while reducing GPU hours and $CO_2$ emissions by many orders of magnitude.
Exploring Randomly Wired Neural Networks for Image Recognition
In this paper, we explore a more diverse set of connectivity patterns through the lens of randomly wired neural networks.
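A randomly wired network can be built by drawing a random graph and interpreting its nodes as operations and its edges as data flow. The sketch below generates an acyclic wiring in the spirit of an Erdős–Rényi generator; the node count, edge probability, and the `random_wiring` function are illustrative assumptions, not the paper's exact generators (which also include WS and BA graphs mapped into convolutional stages).

```python
import random

def random_wiring(n_nodes=8, p=0.4, seed=0):
    """Generate a random DAG over n_nodes ordered nodes.

    An edge i -> j (with i < j) is included with probability p, so the
    graph is acyclic by construction. Each edge would carry a tensor
    from one operation node to the next in an actual network.
    """
    rng = random.Random(seed)
    edges = [(i, j)
             for i in range(n_nodes)
             for j in range(i + 1, n_nodes)
             if rng.random() < p]
    # Ensure every non-input node has at least one incoming edge,
    # so no operation node is left disconnected from the input.
    targets = {j for _, j in edges}
    for j in range(1, n_nodes):
        if j not in targets:
            edges.append((j - 1, j))
    return edges
```

Because node indices give a topological order for free, a forward pass can simply visit nodes in index order, aggregating each node's incoming edges before applying its operation.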