Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells

Automated design of neural network architectures tailored for a specific task is an extremely promising, albeit inherently difficult, avenue to explore. While most results in this domain have been achieved on image classification and language modelling problems, here we concentrate on dense per-pixel tasks, in particular, semantic image segmentation using fully convolutional networks. In contrast to the aforementioned areas, designing a fully convolutional network requires several changes, ranging from the kinds of operations used (e.g., dilated convolutions) to solving a more difficult optimisation problem. In this work, we are particularly interested in searching for high-performance compact segmentation architectures, able to run in real-time using limited resources. To achieve that, we intentionally over-parameterise the architecture during training via a set of auxiliary cells that provide an intermediate supervisory signal and can be omitted during the evaluation phase. The design of the auxiliary cell is emitted by a controller, a neural network with a fixed structure trained using reinforcement learning. More crucially, we demonstrate how to efficiently search for these architectures within limited time and computational budgets. In particular, we rely on a progressive strategy that terminates the training of non-promising architectures early, and on Polyak averaging coupled with knowledge distillation to speed up convergence. Quantitatively, in 8 GPU-days our approach discovers a set of architectures performing on par with the state of the art among compact models on semantic segmentation, pose estimation and depth prediction tasks. Code will be made available here: https://github.com/drsleep/nas-segm-pytorch
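To make the training-time over-parameterisation idea concrete, below is a minimal PyTorch sketch of an auxiliary cell attached to an intermediate feature map: it supplies an extra per-pixel supervisory signal during training and is simply dropped at evaluation time, alongside a Polyak (exponential moving average) weight update. All module names (AuxiliaryCell, SegNet, polyak_update), the 0.4 auxiliary-loss weight, and the network layout are illustrative assumptions, not the authors' released implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class AuxiliaryCell(nn.Module):
    """Lightweight head producing an auxiliary per-pixel prediction (training only)."""

    def __init__(self, in_ch: int, num_classes: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, x):
        return self.conv(x)


class SegNet(nn.Module):
    """Toy segmentation network with one intermediate auxiliary output."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.body = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(64, num_classes, 1)
        self.aux = AuxiliaryCell(32, num_classes)  # attached to the stem features

    def forward(self, x):
        feats = self.stem(x)
        out = self.head(self.body(feats))
        if self.training:
            return out, self.aux(feats)  # auxiliary branch only used during training
        return out                       # omitted during evaluation


def polyak_update(ema_model, model, decay=0.999):
    """Exponential moving average of the weights (Polyak averaging)."""
    with torch.no_grad():
        for p_ema, p in zip(ema_model.parameters(), model.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1.0 - decay)


# One illustrative training step with dummy data.
model = SegNet(num_classes=21)
ema_model = copy.deepcopy(model)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

images = torch.randn(2, 3, 128, 128)
labels = torch.randint(0, 21, (2, 32, 32))

model.train()
out, aux_out = model(images)
out = F.interpolate(out, size=labels.shape[-2:], mode="bilinear", align_corners=False)
aux_out = F.interpolate(aux_out, size=labels.shape[-2:], mode="bilinear", align_corners=False)
# 0.4 is an assumed auxiliary-loss weight, not taken from the paper.
loss = F.cross_entropy(out, labels) + 0.4 * F.cross_entropy(aux_out, labels)
loss.backward()
optimiser.step()
polyak_update(ema_model, model)
```

At evaluation time, `model.eval()` (or the averaged `ema_model`) returns only the main prediction, so the auxiliary cell adds no inference cost.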

CVPR 2019 (PDF / Abstract)
Task                        Dataset               Model                Metric   Value    Global Rank
Monocular Depth Estimation  NYU-Depth V2          FastDenseNas-arch1   RMSE     0.526    #59
Monocular Depth Estimation  NYU-Depth V2          FastDenseNas-arch2   RMSE     0.525    #58
Monocular Depth Estimation  NYU-Depth V2          FastDenseNas-arch0   RMSE     0.523    #57
Semantic Segmentation       PASCAL VOC 2012 val   FastDenseNas-arch2   mIoU     77.3%    #16
Semantic Segmentation       PASCAL VOC 2012 val   FastDenseNas-arch1   mIoU     77.1%    #18
Semantic Segmentation       PASCAL VOC 2012 val   FastDenseNas-arch0   mIoU     78.0%    #13
