Progressive Neural Architecture Search

Introduced by Liu et al. in Progressive Neural Architecture Search

Progressive Neural Architecture Search, or PNAS, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to complex ones, pruning out unpromising structures as we go.

At iteration $b$ of the algorithm, we have a set of $K$ candidate cells (each of size $b$ blocks), which we train and evaluate on a dataset of interest. Since this process is expensive, PNAS also learns a surrogate model that can predict the performance of a structure without needing to train it. We then expand the $K$ candidates of size $b$ into $K' \gg K$ children, each of size $b+1$. The surrogate is used to rank all of the $K'$ children; we pick the top $K$, then train and evaluate them. We continue in this way until $b = B$, the maximum number of blocks we want to use in a cell.
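The Python sketch below illustrates the shape of this loop under several assumptions: cells are encoded as tuples of operation indices, `train_and_evaluate` is a placeholder for the expensive training step, and the toy `Surrogate` simply propagates parent scores (the paper instead fits an RNN/MLP regressor on observed accuracies). It is a minimal illustration of the progressive beam search, not the authors' implementation.

```python
# Hedged sketch of the PNAS SMBO loop. Cell encodings, the evaluator, and the
# surrogate are hypothetical placeholders for illustration only.
import random

B = 5        # maximum number of blocks per cell
K = 8        # beam width: number of candidates kept at each step
NUM_OPS = 4  # size of the per-block operation vocabulary (assumed)

def train_and_evaluate(cell):
    """Placeholder for the expensive step: train the CNN built from `cell`
    and return its validation accuracy. Random here for illustration."""
    return random.random()

def expand_by_one_block(cell):
    """Return all children of `cell` with one extra block appended."""
    return [cell + (op,) for op in range(NUM_OPS)]

class Surrogate:
    """Toy surrogate: predicts a child's accuracy as its parent's score.
    The paper fits a learned regressor on (cell, accuracy) pairs instead."""
    def __init__(self):
        self.scores = {}
    def fit(self, cells, scores):
        self.scores.update(zip(cells, scores))
    def predict(self, cell):
        return self.scores.get(cell[:-1], 0.0)

# b = 1: enumerate, train, and evaluate all one-block cells.
candidates = [(op,) for op in range(NUM_OPS)]
scores = [train_and_evaluate(c) for c in candidates]
surrogate = Surrogate()
surrogate.fit(candidates, scores)

for b in range(2, B + 1):
    # Expand the current candidates into K' >> K children of size b.
    children = [ch for c in candidates for ch in expand_by_one_block(c)]
    # Rank all children with the surrogate (no training needed) ...
    ranked = sorted(children, key=surrogate.predict, reverse=True)
    # ... keep the top K, and train and evaluate only those.
    candidates = ranked[:K]
    scores = [train_and_evaluate(c) for c in candidates]
    surrogate.fit(candidates, scores)

best = max(zip(candidates, scores), key=lambda cs: cs[1])[0]
print("best cell:", best)
```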

Source: Progressive Neural Architecture Search
