We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms.
In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply it to ImageNet by stacking multiple copies of this cell, each with its own parameters, to design a convolutional architecture named the "NASNet architecture".
#7 best model for Image Classification on ImageNet
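The cell-stacking scheme above can be illustrated with a minimal NumPy sketch. The `make_cell` helper (a single linear map plus ReLU) is a hypothetical stand-in for the searched NASNet cell; the point is only that the cell *structure* is shared while every stacked copy carries its own parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cell(dim):
    # Hypothetical stand-in for a searched cell. Each copy gets its OWN
    # weights, as in the NASNet scheme: the structure is shared, the
    # parameters are not.
    return {"W": rng.standard_normal((dim, dim)) * 0.1}

def cell_forward(cell, x):
    return np.maximum(x @ cell["W"], 0.0)

def stack_cells(num_cells, dim):
    # Repeating the same searched cell more times yields a deeper network
    # for the larger target dataset (CIFAR-10 cell -> ImageNet model).
    return [make_cell(dim) for _ in range(num_cells)]

def forward(cells, x):
    for cell in cells:
        x = cell_forward(cell, x)
    return x

net = stack_cells(num_cells=6, dim=16)
y = forward(net, rng.standard_normal((1, 16)))
```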
We present MorphNet, an approach to automate the design of neural network structures.
Recent works have highlighted the strength of the Transformer architecture on sequence tasks while, at the same time, neural architecture search (NAS) has begun to outperform human-designed models.
#2 best model for Machine Translation on WMT2014 English-German
This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner.
#15 best model for Language Modelling on Penn Treebank (Word Level)
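The differentiable formulation can be sketched in the style of DARTS: each edge of the search graph outputs a softmax-weighted mixture of candidate operations, so the discrete choice of operation is relaxed into continuous architecture parameters `alpha`. The toy operations below are assumptions for illustration.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Toy candidate operations on one edge of the search graph (illustrative;
# DARTS uses convolutions, pooling, identity and a zero op).
ops = [
    lambda x: x,                   # identity
    lambda x: np.zeros_like(x),    # "zero" (no connection)
    lambda x: np.maximum(x, 0.0),  # ReLU, standing in for a learned op
]

def mixed_op(x, alpha):
    # Continuous relaxation: the edge outputs a softmax-weighted sum over
    # ALL candidate ops, so `alpha` is differentiable and can be optimized
    # by gradient descent alongside the network weights.
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = np.array([-1.0, 2.0])
alpha = np.zeros(3)           # one architecture parameter per candidate op
out = mixed_op(x, alpha)      # uniform mixture: (x + 0 + relu(x)) / 3
```

After search, the relaxation is discretized by keeping the highest-weighted operation on each edge.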
The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set.
#6 best model for Architecture Search on CIFAR-10 Image Classification
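The policy-gradient controller can be sketched with a toy REINFORCE loop. The three-way categorical "controller" and the fixed `rewards` array (standing in for each candidate subgraph's validation accuracy) are hypothetical simplifications, not the paper's RNN controller.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Illustrative setting: three candidate subgraphs; `rewards` plays the role
# of validation accuracy for each one.
rewards = np.array([0.2, 0.9, 0.5])
logits = np.zeros(3)          # controller parameters
lr, baseline = 0.3, 0.0

for _ in range(500):
    p = softmax(logits)
    i = rng.choice(3, p=p)                       # sample a subgraph
    r = rewards[i]                               # observe validation reward
    grad_logp = -p                               # gradient of log p(i) ...
    grad_logp[i] += 1.0                          # ... for a categorical policy
    logits += lr * (r - baseline) * grad_logp    # REINFORCE update
    baseline += 0.1 * (r - baseline)             # moving-average baseline

best = int(np.argmax(logits))  # controller's preferred subgraph
```

The baseline subtraction is a standard variance-reduction trick; the controller's distribution concentrates on the subgraph with the highest expected reward.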
Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model.
#4 best model for Architecture Search on CIFAR-10 Image Classification
Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available.
SOTA for Image Classification on ImageNet
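The compound-scaling rule the EfficientNet paper builds on this observation can be sketched directly: a single coefficient `phi` scales depth, width and input resolution together. `ALPHA`, `BETA` and `GAMMA` are the coefficients reported in the paper (chosen so that ALPHA * BETA^2 * GAMMA^2 ≈ 2, i.e. each increment of `phi` roughly doubles FLOPs); the base depth/width/resolution values are illustrative.

```python
# Coefficients reported in the EfficientNet paper, found by a small grid
# search under the constraint ALPHA * BETA**2 * GAMMA**2 ≈ 2.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_depth=1.0, base_width=1.0, base_res=224):
    """Scale depth, width and resolution jointly by one coefficient phi."""
    depth = base_depth * ALPHA ** phi     # layer-count multiplier
    width = base_width * BETA ** phi      # channel multiplier
    res = round(base_res * GAMMA ** phi)  # input image resolution
    return depth, width, res

d, w, r = compound_scale(phi=1)
# FLOPs grow roughly as depth * width**2 * res**2, so each unit of phi
# about doubles the compute budget.
```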