ConCoDE: Hard-constrained Differentiable Co-Exploration Method for Neural Architectures and Hardware Accelerators

29 Sep 2021 · Deokki Hong, Kanghyun Choi, Hey Yoon Lee, Joonsang Yu, Youngsok Kim, Noseong Park, Jinho Lee

While DNNs achieve superhuman performance in a number of areas, this often comes at skyrocketing computational cost. Co-exploration of an optimal neural architecture together with its hardware accelerator is an approach of rising interest that addresses this cost, especially in resource-constrained systems (e.g., embedded and mobile devices). The difficulty of searching the large co-exploration space is often addressed by adopting the idea of differentiable neural architecture search. Despite its superior search efficiency, differentiable co-exploration faces a critical challenge: it cannot systematically satisfy hard constraints such as a frame-rate or power budget. To handle this hard-constraint problem, we propose ConCoDE, which searches for hard-constrained solutions without compromising the global design objectives. By manipulating the gradients in the interest of the given hard constraint, ConCoDE obtains high-quality solutions that satisfy the constraint. Experimental results show that ConCoDE meets the constraints even under tight conditions, and that the solutions it finds are of high quality compared to those searched without any constraint.
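The core idea of steering gradients toward a hard constraint can be illustrated with a toy sketch. Everything below (the scalar "architecture parameter", the latency model, the fixed penalty weight) is an illustrative assumption for exposition, not the paper's actual gradient-manipulation scheme:

```python
# Toy illustration: a single scalar parameter x controls both task quality
# and latency. When the hard latency budget is violated, the gradient is
# augmented with a penalty term that pushes the search back into the
# feasible region. All names and formulas here are hypothetical.

def task_loss(x):
    # Lower is better; the unconstrained optimum sits at x = 3.0.
    return (x - 3.0) ** 2

def latency(x):
    # Latency grows with x; the hard budget below implies x <= 2.0.
    return 0.5 * x

LAT_BUDGET = 1.0  # hard constraint: latency(x) <= 1.0

def grad(x, penalty=10.0):
    g = 2.0 * (x - 3.0)          # d(task_loss)/dx
    if latency(x) > LAT_BUDGET:  # constraint violated:
        g += penalty * 0.5       # add penalty * d(latency)/dx
    return g

# Plain gradient descent; x starts in the infeasible region.
x = 4.0
for _ in range(200):
    x -= 0.05 * grad(x)

# x settles near the constraint boundary (x ~ 2.0) instead of the
# unconstrained optimum (x = 3.0): the penalized gradient trades task
# loss for constraint satisfaction.
```

In ConCoDE the same tension arises in a much larger space (architecture and accelerator parameters jointly), and the gradient manipulation is tied to the specific hard constraint (frame rate, power), but the mechanism sketched here, diverting descent toward feasibility whenever the constraint is violated, is the general shape of the approach.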
