Progressive Neural Architecture Search, or PNAS, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy: it searches the space of cell structures, starting with simple (shallow) models and progressing to more complex ones, pruning out unpromising structures along the way.
At iteration $b$ of the algorithm, we have a set of $K$ candidate cells (each of size $b$ blocks), which we train and evaluate on a dataset of interest. Since this process is expensive, PNAS also learns a surrogate model that can predict the performance of a structure without training it. We then expand the $K$ candidates of size $b$ into $K' \gg K$ children, each of size $b+1$. The surrogate is used to rank all $K'$ children and pick the top $K$, which are then trained and evaluated. We continue in this way until $b = B$, the maximum number of blocks we want to use in a cell.
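To make the loop concrete, here is a minimal Python sketch of the progressive search described above. Everything in it is an illustrative assumption: cells are encoded as flat lists of operation names (in PNAS proper, a block combines two inputs with two operations), `train_and_evaluate` and `expand_by_one_block` are hypothetical placeholders, and the longest-shared-prefix surrogate stands in for the learned predictor (an RNN/MLP ensemble) used in the paper.

```python
import random

# Hypothetical stand-ins: in real PNAS these would train a child network on
# the target dataset and enumerate all one-block extensions of a cell.
def train_and_evaluate(cell):
    """Placeholder for training a cell and returning validation accuracy."""
    return random.random()

def expand_by_one_block(cell):
    """Placeholder: return all cells obtained by appending one more block."""
    return [cell + [op] for op in ("sep3x3", "sep5x5", "maxpool", "identity")]

class Surrogate:
    """Toy surrogate predicting a cell's accuracy from observed (cell, score) pairs.

    PNAS trains a learned predictor here; scoring a cell by its longest
    shared prefix with an already-evaluated cell is just a sketch.
    """
    def __init__(self):
        self.history = []  # list of (cell, observed score) pairs

    def fit(self, cells, scores):
        self.history.extend(zip(cells, scores))

    def predict(self, cell):
        if not self.history:
            return 0.0
        def prefix_match(seen):
            return sum(a == b for a, b in zip(seen, cell))
        # Reuse the score of the most structurally similar evaluated cell.
        best = max(self.history, key=lambda cs: prefix_match(cs[0]))
        return best[1]

def pnas_search(K=8, B=3):
    surrogate = Surrogate()
    # b = 1: start from all single-block cells (here: single operations).
    candidates = [[op] for op in ("sep3x3", "sep5x5", "maxpool", "identity")]
    for b in range(1, B + 1):
        scores = [train_and_evaluate(c) for c in candidates]  # expensive step
        surrogate.fit(candidates, scores)                     # update predictor
        if b == B:
            break
        # Expand every size-b candidate into all size-(b+1) children (K' >> K) ...
        children = [child for c in candidates for child in expand_by_one_block(c)]
        # ... rank them with the cheap surrogate and keep only the top K.
        children.sort(key=surrogate.predict, reverse=True)
        candidates = children[:K]
    return max(zip(candidates, scores), key=lambda cs: cs[1])

if __name__ == "__main__":
    cell, acc = pnas_search()
    print("best cell:", cell, "score:", acc)
```

Note the key economy of the method, visible in the loop: the expensive `train_and_evaluate` call only ever runs on $K$ cells per level, while the cheap surrogate handles the much larger set of $K'$ children.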
Source: Progressive Neural Architecture Search
| Task | Papers | Share |
|---|---|---|
| Language Modelling | 1 | 7.69% |
| Large Language Model | 1 | 7.69% |
| Recommendation Systems | 1 | 7.69% |
| Retrieval | 1 | 7.69% |
| Adversarial Attack | 1 | 7.69% |
| Computed Tomography (CT) | 1 | 7.69% |
| Image Reconstruction | 1 | 7.69% |
| Bayesian Optimization | 1 | 7.69% |
| Evolutionary Algorithms | 1 | 7.69% |