Neural Architecture Search

583 papers with code • 20 benchmarks • 25 datasets

Neural architecture search (NAS) is a technique for automating the design of artificial neural networks (ANNs), a widely used class of models in machine learning. NAS takes the process a human designer would follow, manually tweaking a network and learning what works well, and automates it, allowing the search to discover more complex architectures.
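The simplest NAS strategy is random search over a discrete search space. The sketch below is a toy illustration, not any paper's method: the search space, `sample_architecture`, and `evaluate` (a stand-in for training a candidate and measuring validation accuracy) are all hypothetical.

```python
import random

# Hypothetical search space: each architecture is a choice of depth,
# width, and kernel size.
SEARCH_SPACE = {
    "depth": [2, 3, 4],
    "width": [16, 32, 64],
    "kernel": [3, 5, 7],
}

def sample_architecture(rng):
    # Draw one candidate uniformly at random from the search space.
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    # Stand-in for "train the candidate and measure validation accuracy".
    # Here we just encode a made-up preference for deeper, wider nets.
    return 0.5 + 0.05 * arch["depth"] + 0.001 * arch["width"]

def random_search(trials=20, seed=0):
    # Sample `trials` candidates and keep the best-scoring one.
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

The papers below replace this brute-force loop with smarter search strategies: reinforcement learning, differentiable relaxation, parameter sharing, and Bayesian optimization.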

Image Credit: NAS with Reinforcement Learning



Most implemented papers

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

tensorflow/tpu ICML 2019

Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available.
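EfficientNet's key idea is compound scaling: instead of scaling depth, width, or resolution independently, all three grow together under a single coefficient. A minimal sketch, using the base multipliers reported in the paper (found via a small grid search, with alpha * beta^2 * gamma^2 close to 2 so that FLOPs roughly double per unit of phi):

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    # Compound scaling: one coefficient `phi` controls how much extra
    # compute to spend; the fixed base multipliers decide how that
    # compute is split between depth, width, and input resolution.
    depth_mult = alpha ** phi
    width_mult = beta ** phi
    resolution_mult = gamma ** phi
    return depth_mult, width_mult, resolution_mult
```

For example, `compound_scale(0)` returns the unscaled baseline (EfficientNet-B0), and larger `phi` values yield the B1-B7 family.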

DARTS: Differentiable Architecture Search

quark0/darts ICLR 2019

This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner.
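The core trick in DARTS is relaxing the discrete choice among candidate operations on each edge into a softmax over learnable architecture parameters, so the choice becomes differentiable. A minimal sketch (scalar operations stand in for real network ops):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_op(x, alphas, ops):
    # DARTS's continuous relaxation: the edge output is the
    # softmax-weighted sum of every candidate op, so the architecture
    # parameters `alphas` can be learned by gradient descent alongside
    # the network weights.
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, ops))

def discretize(alphas):
    # After search, keep only the op with the largest architecture
    # weight on each edge to obtain the final discrete architecture.
    return max(range(len(alphas)), key=lambda i: alphas[i])
```

Note that summing every candidate op keeps all of their activations in memory, which is the cost ProxylessNAS (below) targets.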

Searching for MobileNetV3

tensorflow/models ICCV 2019

We achieve new state-of-the-art results for mobile classification, detection, and segmentation.

Efficient Neural Architecture Search via Parameter Sharing

google-research/google-research 9 Feb 2018

The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set.
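A toy sketch of the policy-gradient controller idea: the controller here is just a categorical distribution over a handful of candidate subgraphs, updated with REINFORCE and a moving-average baseline to maximize expected reward. The reward function stands in for validation accuracy; the learning rate, baseline decay, and step count are arbitrary choices for illustration, not values from the paper.

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train_controller(reward_of, n_subgraphs=3, steps=500, lr=0.5, seed=0):
    # REINFORCE over a categorical "controller": sample a subgraph,
    # observe its reward, and push probability mass toward actions whose
    # reward exceeds the moving-average baseline.
    rng = random.Random(seed)
    logits = [0.0] * n_subgraphs
    baseline = 0.0
    for _ in range(steps):
        probs = softmax(logits)
        action = rng.choices(range(n_subgraphs), weights=probs)[0]
        reward = reward_of(action)
        advantage = reward - baseline
        baseline = 0.9 * baseline + 0.1 * reward
        # Gradient of log pi(action) w.r.t. logits: onehot(action) - probs.
        for i in range(n_subgraphs):
            grad = (1.0 if i == action else 0.0) - probs[i]
            logits[i] += lr * advantage * grad
    return softmax(logits)
```

ENAS's actual efficiency gain comes from sharing weights among all subgraphs of one large graph, so each candidate is evaluated without training from scratch; that part is elided here.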

MnasNet: Platform-Aware Neural Architecture Search for Mobile

tensorflow/tpu CVPR 2019

In this paper, we propose an automated mobile neural architecture search (MNAS) approach that explicitly incorporates model latency into the main objective, so that the search can identify a model achieving a good trade-off between accuracy and latency.
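MnasNet folds latency into a single scalar reward of the form ACC(m) * (LAT(m) / T)^w, where T is the target latency. A minimal sketch using the paper's soft-constraint exponent w = -0.07 (the example target latency of 75 ms is illustrative):

```python
def mnas_reward(accuracy, latency_ms, target_ms=75.0, w=-0.07):
    # Soft multi-objective reward: a model slower than the target is
    # penalized smoothly (not rejected outright), letting the search
    # trade a little latency for accuracy and vice versa.
    return accuracy * (latency_ms / target_ms) ** w
```

At exactly the target latency the reward equals the raw accuracy; models twice as slow lose a few percent of reward, and faster models gain slightly.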

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware

MIT-HAN-LAB/ProxylessNAS ICLR 2019

We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level as regular training, while still allowing a large candidate set.
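The memory saving comes from path binarization: instead of computing the softmax-weighted sum of every candidate op per edge (which keeps all of their activations in memory, as in DARTS), only one sampled path is active per forward pass. A simplified sketch:

```python
import random

def binarized_mixed_op(x, path_probs, ops, rng):
    # ProxylessNAS-style path binarization (simplified): sample a single
    # active op according to the architecture probabilities, so memory
    # use stays at the level of training one ordinary network instead of
    # growing with the number of candidate ops.
    idx = rng.choices(range(len(ops)), weights=path_probs)[0]
    return ops[idx](x)
```

In the real method the path probabilities are themselves learned; here they are passed in as fixed inputs for illustration.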

EfficientNetV2: Smaller Models and Faster Training

google/automl 1 Apr 2021

By pretraining on the same ImageNet21k, our EfficientNetV2 achieves 87.3% top-1 accuracy on ImageNet ILSVRC2012, outperforming the recent ViT by 2.0% accuracy while training 5x-11x faster using the same computing resources.

Auto-Keras: An Efficient Neural Architecture Search System

keras-team/autokeras 27 Jun 2018

In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search.

Progressive Neural Architecture Search

tensorflow/models ECCV 2018

We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms.

AMC: AutoML for Model Compression and Acceleration on Mobile Devices

mit-han-lab/amc ECCV 2018

Model compression is a critical technique to efficiently deploy neural network models on mobile devices which have limited computation resources and tight power budgets.