Neural Architecture Search

774 papers with code • 26 benchmarks • 27 datasets

Neural architecture search (NAS) is a technique for automating the design of artificial neural networks (ANNs), a widely used class of models in machine learning. NAS takes the process a human engineer would otherwise perform by hand, tweaking a network architecture, observing what works well, and iterating, and automates it in order to discover more complex and better-performing architectures.
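To make the idea concrete, below is a minimal sketch of the automated sample-train-evaluate-select loop that NAS substitutes for manual tuning. It uses plain random search over a tiny MLP search space on toy data; the names (SEARCH_SPACE, build_mlp, evaluate) are illustrative assumptions rather than the API of any particular NAS system, and practical methods use far larger search spaces and smarter search strategies.

```python
# Minimal NAS sketch: automate the "tweak, train, evaluate" loop by sampling
# candidate architectures from a search space and keeping the best one.
# Illustrative only; random search stands in for the search strategy.
import random
import torch
import torch.nn as nn

SEARCH_SPACE = {
    "num_layers": [1, 2, 3],
    "hidden_units": [32, 64, 128],
    "activation": [nn.ReLU, nn.Tanh],
}

def sample_architecture():
    """Sample one candidate architecture (a point in the search space)."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def build_mlp(arch, in_dim=20, out_dim=2):
    """Instantiate the sampled architecture as a PyTorch model."""
    layers, dim = [], in_dim
    for _ in range(arch["num_layers"]):
        layers += [nn.Linear(dim, arch["hidden_units"]), arch["activation"]()]
        dim = arch["hidden_units"]
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)

def evaluate(model, x, y, epochs=20):
    """Short training run; returns accuracy on the toy data as the score."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return (model(x).argmax(dim=1) == y).float().mean().item()

# Toy data: the point is the search loop, not the task itself.
x = torch.randn(256, 20)
y = (x[:, 0] > 0).long()

best_arch, best_score = None, -1.0
for _ in range(10):                      # search budget: 10 candidate models
    arch = sample_architecture()
    score = evaluate(build_mlp(arch), x, y)
    if score > best_score:
        best_arch, best_score = arch, score

print("Best architecture found:", best_arch, "accuracy:", round(best_score, 3))
```

Random search appears here only because it is the simplest possible search strategy; the papers listed below replace it with reinforcement-learning controllers, Bayesian optimization, network morphism, or one-shot weight-sharing approaches.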

Image Credit: Neural Architecture Search with Reinforcement Learning

Libraries

Use these libraries to find Neural Architecture Search models and implementations
See all 24 libraries.

Most implemented papers

Auto-Keras: An Efficient Neural Architecture Search System

keras-team/autokeras 27 Jun 2018

In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search.
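As a usage sketch only (assuming the public autokeras interface with ImageClassifier, max_trials, fit, evaluate, and export_model; check the keras-team/autokeras docs for the current API), Auto-Keras wraps this search behind a scikit-learn-style interface:

```python
# Illustrative Auto-Keras usage: the library searches over architectures
# (guided by Bayesian optimization and network morphism in the paper)
# behind a simple fit/predict interface. API names assumed from the
# keras-team/autokeras project; consult its documentation for the
# current version.
import autokeras as ak
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# max_trials bounds the number of candidate architectures to evaluate.
clf = ak.ImageClassifier(max_trials=3, overwrite=True)
clf.fit(x_train, y_train, epochs=5)        # runs the architecture search
print("Test performance:", clf.evaluate(x_test, y_test))

best_model = clf.export_model()            # export the best found Keras model
best_model.summary()
```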

SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization

tensorflow/models CVPR 2020

We propose SpineNet, a backbone with scale-permuted intermediate features and cross-scale connections that is learned on an object detection task by Neural Architecture Search.

Learning Efficient Convolutional Networks through Network Slimming

Eric-mingjie/network-slimming ICCV 2017

For VGGNet, a multi-pass version of network slimming gives a 20x reduction in model size and a 5x reduction in computing operations.

AMC: AutoML for Model Compression and Acceleration on Mobile Devices

mit-han-lab/amc ECCV 2018

Model compression is a critical technique to efficiently deploy neural network models on mobile devices which have limited computation resources and tight power budgets.

Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation

tensorflow/models CVPR 2019

Therefore, we propose to search the network-level structure in addition to the cell-level structure, which forms a hierarchical architecture search space.

Neural Architecture Search with Reinforcement Learning

tensorflow/models 5 Nov 2016

Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model.

BAM: Bottleneck Attention Module

xmu-xiaoma666/External-Attention-pytorch 17 Jul 2018

In this work, we focus on the effect of attention in general deep neural networks.

AutoSlim: Towards One-Shot Architecture Search for Channel Numbers

JiahuiYu/slimmable_networks ICLR 2020

Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs).

Once-for-All: Train One Network and Specialize it for Efficient Deployment

mit-han-lab/once-for-all 26 Aug 2019

On diverse edge devices, OFA consistently outperforms state-of-the-art (SOTA) NAS methods (up to 4.0% ImageNet top-1 accuracy improvement over MobileNetV3, or the same accuracy but 1.5x faster than MobileNetV3 and 2.6x faster than EfficientNet w.r.t. measured latency) while reducing GPU hours and $CO_2$ emission by many orders of magnitude.

Exploring Randomly Wired Neural Networks for Image Recognition

facebookresearch/pycls ICCV 2019

In this paper, we explore a more diverse set of connectivity patterns through the lens of randomly wired neural networks.