To satisfy the stringent computational-resource requirements of real-time semantic segmentation, most approaches focus on the hand-crafted design of lightweight segmentation networks.
To alleviate the performance disturbance issue, we propose a new disturbance-immune update strategy for model updating.
Neural architecture search (NAS) is a promising method for automatically finding excellent architectures. Commonly used search strategies include evolutionary algorithms, Bayesian optimization, and predictor-based methods, which employ a predictor to rank sampled architectures.
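The predictor-based strategy mentioned above can be sketched as follows. This is a minimal illustration, not any paper's implementation: the architecture encoding and the predictor (here a trivial heuristic standing in for a learned model) are assumptions.

```python
import random

def sample_architecture(rng):
    """Sample a toy architecture: a tuple of per-layer widths."""
    return tuple(rng.choice([16, 32, 64]) for _ in range(4))

def predictor_score(arch):
    """Stand-in predictor: a heuristic on total width. A real predictor
    would be a model trained on (architecture, accuracy) pairs."""
    return sum(arch)

def rank_candidates(n_samples=100, top_k=5, seed=0):
    """Sample many architectures, rank them by predicted quality,
    and keep only the top-k for full training/evaluation."""
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(n_samples)]
    return sorted(candidates, key=predictor_score, reverse=True)[:top_k]
```

The point of this pattern is that scoring with the predictor is far cheaper than training each candidate, so the expensive evaluation budget is spent only on the highest-ranked samples.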
3D Convolutional Neural Networks (CNNs) have been widely applied to 3D scene understanding, such as video analysis and volumetric image recognition.
One common way is to search on a smaller proxy dataset (e.g., CIFAR-10) and then transfer the found architecture to the target task (e.g., ImageNet).
Existing neural network architectures in computer vision, whether designed by humans or by machines, were typically found using both images and their associated labels.
Neural Architecture Search (NAS) has shown great potential in automatically designing scalable network architectures for dense image prediction.
Training efficiency is thus boosted, since the training space has been greedily shrunk from all paths to the potentially good ones.
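The greedy shrinking idea can be illustrated with a short sketch. This is a hedged toy version, not the paper's code: each round drops the weaker half of the candidate paths according to some quality estimate, so later training only visits the surviving, potentially good paths.

```python
def greedy_shrink(paths, estimate_quality, rounds=3):
    """Repeatedly halve the candidate set, retaining the better half.

    `paths` is any list of candidates; `estimate_quality` is a callable
    returning a higher score for better paths (an assumption standing in
    for, e.g., validation accuracy of a sampled sub-network).
    """
    for _ in range(rounds):
        if len(paths) <= 1:
            break
        scored = sorted(paths, key=estimate_quality, reverse=True)
        paths = scored[: max(1, len(scored) // 2)]  # drop the weaker half
    return paths
```

For example, starting from 16 candidate paths scored by their own value, three rounds of halving leave only the two strongest.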
Without extra retraining or post-processing steps, we are able to train a single set of shared weights on ImageNet and use these weights to obtain child models whose sizes range from 200 to 1000 MFLOPs.
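Deriving child models of different sizes from one set of shared weights can be sketched as below. The operation choices, their MFLOPs costs, and the greedy budget-filling procedure are all illustrative assumptions; a real weight-sharing supernet would select sub-networks from its trained weights rather than from a static table.

```python
import random

# Hypothetical (kernel_size, MFLOPs-per-layer) choices for each layer.
CHOICES = [(3, 100), (5, 250), (7, 500)]

def sample_child(budget_mflops, n_layers=4, seed=None):
    """Pick per-layer operations whose running cost stays within budget.

    Returns the chosen kernel sizes and the total MFLOPs cost. If no
    choice fits at some layer, fall back to the cheapest operation.
    """
    rng = random.Random(seed)
    child, cost = [], 0
    for _ in range(n_layers):
        affordable = [c for c in CHOICES if cost + c[1] <= budget_mflops]
        if not affordable:  # nothing fits: take the cheapest op anyway
            affordable = [min(CHOICES, key=lambda c: c[1])]
        k, f = rng.choice(affordable)
        child.append(k)
        cost += f
    return child, cost
```

Varying `budget_mflops` from small to large then yields a family of child models of increasing size, analogous to the 200-1000 MFLOPs range described in the text.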