We propose to incorporate neural architecture search (NAS) into general-purpose multi-task learning (GP-MTL).
To remedy this, we propose \mldas, a mixed-level reformulation for NAS that can be optimized efficiently and reliably.
To this end, we propose a hierarchical trinity search framework to simultaneously discover efficient architectures for all components (i.e., backbone, neck, and head) of an object detector in an end-to-end manner.
To solve these problems, we propose a new lifelong learning framework, Searchable Extension Units (SEU), which introduces neural architecture search into lifelong learning. SEU removes the need for a predefined original model and instead searches for task-specific extension units, without compromising the model's performance across tasks.
During search, we evaluate candidate blocks in different layers and construct an accuracy table that is then used at deployment time.
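The snippet above only names the idea. A minimal sketch of such a two-phase scheme, with all block names, latencies, and scores being illustrative assumptions rather than the paper's actual measurements:

```python
# Toy sketch: build an "accuracy table" indexed by (layer, candidate block)
# during search, then at deployment pick the best block per layer under a
# latency budget. All numbers and block names are hypothetical.

candidate_blocks = ["mbconv3", "mbconv6", "identity"]
latency = {"mbconv3": 3.0, "mbconv6": 5.0, "identity": 1.0}  # ms, assumed

def evaluate(layer, block):
    # Stand-in for measuring validation accuracy with `block` at `layer`.
    scores = {"mbconv3": 0.70, "mbconv6": 0.74, "identity": 0.60}
    return scores[block] + 0.01 * layer

# Search phase: evaluate every candidate block in every layer once.
accuracy_table = {
    (layer, block): evaluate(layer, block)
    for layer in range(4)
    for block in candidate_blocks
}

# Deployment phase: per-layer choice among blocks that fit the budget,
# read directly from the precomputed table (no retraining needed).
budget_per_layer = 4.0
deployed = {
    layer: max(
        (b for b in candidate_blocks if latency[b] <= budget_per_layer),
        key=lambda b: accuracy_table[(layer, b)],
    )
    for layer in range(4)
}
```

The point of the table is that deployment-time selection becomes a cheap lookup, so the same search cost can serve many different latency budgets.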
To this end, we design a hierarchical SR search space and propose a hierarchical controller for architecture search.
Here we propose a novel frame-level FAS method based on Central Difference Convolution (CDC), which is able to capture intrinsic detailed patterns by aggregating both intensity and gradient information.
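A single-channel numpy sketch of the central-difference idea, assuming the commonly used formulation y(p0) = Σ_k w(k)·x(p0+k) − θ·x(p0)·Σ_k w(k), where θ blends vanilla convolution (intensity) with the central-difference term (gradient); shapes, padding, and the θ value here are illustrative, not the paper's implementation:

```python
import numpy as np

def central_difference_conv2d(x, w, theta=0.7):
    """2-D central difference convolution (single channel, 'valid' padding).

    Combines the vanilla convolution response with a central-difference
    term: y(p0) = sum_k w(k) * x(p0 + k) - theta * x(p0) * sum_k w(k).
    theta = 0 recovers a plain convolution.
    """
    kh, kw = w.shape
    H, W = x.shape
    oh, ow = H - kh + 1, W - kw + 1
    out = np.zeros((oh, ow))
    w_sum = w.sum()
    for i in range(oh):
        for j in range(ow):
            patch = x[i:i + kh, j:j + kw]
            vanilla = (patch * w).sum()          # intensity aggregation
            center = x[i + kh // 2, j + kw // 2]  # central pixel
            out[i, j] = vanilla - theta * center * w_sum
    return out
```

Note that the difference term only costs one extra multiply per output position, since Σ_k w(k)·(x(p0+k) − x(p0)) collapses to the vanilla response minus x(p0)·Σ_k w(k).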
Neural architecture search (NAS) relies on a good controller to generate better architectures or predict the accuracy of given architectures.
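A toy numpy sketch of what "a controller generating better architectures" can mean in the simplest case: per-layer softmax logits over candidate operations, trained with REINFORCE against a stand-in reward. The layer/op counts, the reward function, and all hyperparameters are hypothetical, not any specific paper's controller:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: 3 layers, each choosing one of 4 candidate ops.
NUM_LAYERS, NUM_OPS = 3, 4
logits = np.zeros((NUM_LAYERS, NUM_OPS))  # controller parameters

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_arch(logits):
    """Sample one operation per layer from the controller's distribution."""
    return [int(rng.choice(NUM_OPS, p=softmax(row))) for row in logits]

def reward(arch):
    # Stand-in for validation accuracy; here op index 2 is "best" everywhere.
    return float(np.mean([1.0 if op == 2 else 0.0 for op in arch]))

# REINFORCE: raise log-probability of sampled ops, scaled by the advantage
# (reward minus a moving-average baseline to reduce variance).
baseline, lr = 0.0, 0.5
for step in range(300):
    arch = sample_arch(logits)
    r = reward(arch)
    baseline = 0.9 * baseline + 0.1 * r
    adv = r - baseline
    for layer, op in enumerate(arch):
        p = softmax(logits[layer])
        grad = -p
        grad[op] += 1.0  # d log p(op) / d logits for a softmax policy
        logits[layer] += lr * adv * grad

best = [int(row.argmax()) for row in logits]  # controller's final choice
```

In a real search the reward would be a trained child network's validation accuracy, which is why controller sample efficiency (or replacing sampling with accuracy prediction) matters so much.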