no code implementations • 12 Oct 2021 • Jingtao Rong, Xinyi Yu, Mingyang Zhang, Linlin Ou
In this paper, an across-task neural architecture search (AT-NAS) is proposed to address this problem by combining gradient-based meta-learning with EA-based NAS to learn over a distribution of tasks.
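The combination described above — an evolutionary outer loop over architectures whose weights are meta-learned across tasks — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the operation set, the scoring proxy, and all function names are hypothetical, and the gradient-based inner adaptation loop is stubbed out.

```python
import random

# Hypothetical candidate operations and a dummy per-op score; in AT-NAS the
# fitness would be validation accuracy after a few gradient-based adaptation
# steps on each sampled task (the meta-learning inner loop, stubbed here).
OPS = ["conv3x3", "conv5x5", "skip", "maxpool"]
SCORE = {"conv3x3": 3, "conv5x5": 2, "skip": 1, "maxpool": 2}

def random_arch(depth=4):
    return [random.choice(OPS) for _ in range(depth)]

def mutate(arch):
    # EA mutation: swap one layer's operation for a random alternative
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

def fitness(arch, tasks):
    # Placeholder proxy aggregated over the task distribution
    return sum(SCORE[op] for op in arch) * len(tasks)

def evolve(tasks, pop_size=8, generations=5, seed=0):
    random.seed(seed)
    population = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill the population with mutants
        population.sort(key=lambda a: fitness(a, tasks), reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=lambda a: fitness(a, tasks))

best = evolve(tasks=["task_a", "task_b"])
```

The design point the sketch captures is the separation of concerns: evolution explores the architecture space, while (in the full method) meta-learned weights make each candidate cheap to evaluate on a new task.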
1 code implementation • 8 Sep 2021 • Mingyang Zhang, Xinyi Yu, Jingtao Rong, Linlin Ou
However, it is still challenging to search for efficient networks because of the gap between the constraint used during search and the real inference time.
no code implementations • 10 Nov 2020 • Mingyang Zhang, Xinyi Yu, Jingtao Rong, Linlin Ou
To overcome this insufficient training, a stage-wise pruning (SWP) method is proposed, which splits a deep supernet into several stage-wise supernets to reduce the number of candidates and uses inplace distillation to supervise the training of each stage.
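The candidate-space reduction from splitting one supernet into stage-wise supernets can be made concrete with a small arithmetic sketch. The numbers below are illustrative, not from the paper: a supernet of depth D with K candidate ops per layer contains K**D end-to-end paths, while S stage-wise supernets of depth D // S together contain only S * K**(D // S).

```python
# Illustrative arithmetic for the candidate-space reduction behind SWP.
# Depth, op count, and stage count here are hypothetical example values.

def num_paths(depth, ops):
    # One monolithic supernet: every layer picks one of `ops` operations
    return ops ** depth

def stagewise_paths(depth, ops, stages):
    # Split into `stages` shorter supernets trained (and pruned) separately
    assert depth % stages == 0, "assume depth divides evenly into stages"
    return stages * ops ** (depth // stages)

full = num_paths(12, 4)            # 4**12 = 16,777,216 paths in one supernet
split = stagewise_paths(12, 4, 3)  # 3 * 4**4 = 768 paths across 3 stages
```

With far fewer candidates sharing each stage's weights, every candidate is sampled and trained far more often, which is the "fuller training" the method targets; inplace distillation (the largest sub-network teaching the sampled ones) supplies the supervision signal each stage would otherwise lack.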
no code implementations • 22 Nov 2019 • Mingyang Zhang, Xinyi Yu, Jingtao Rong, Linlin Ou
Different from previous work, we take the node features from a well-trained graph aggregator, rather than hand-crafted features, as the states in reinforcement learning.
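The idea of using aggregator-produced node features as RL states can be sketched as below. This is a hypothetical illustration: the one-hop mean aggregator stands in for the well-trained aggregator in the paper, and the graph, feature values, and function names are invented for the example.

```python
# Hypothetical sketch: learned node embeddings replace hand-crafted features
# as the RL state. Each node's state is its own feature vector concatenated
# with the mean of its neighbours' features (a simple one-hop aggregation).

def aggregate(features, adjacency):
    states = {}
    for node, feat in features.items():
        neigh = adjacency.get(node, [])
        if neigh:
            dim = len(feat)
            mean = [sum(features[n][i] for n in neigh) / len(neigh)
                    for i in range(dim)]
        else:
            mean = [0.0] * len(feat)  # isolated node: zero neighbourhood
        states[node] = feat + mean    # state fed to the RL policy
    return states

features = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
adjacency = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
states = aggregate(features, adjacency)
# states["a"] == [1.0, 0.0, 0.5, 1.0]: own features plus neighbour mean
```

Because the aggregator is trained rather than hand-designed, the resulting states can encode structural information about the graph that fixed feature engineering would miss.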