no code implementations • 30 Sep 2022 • Vladimir Li, Atsuto Maki
In this paper, by explicitly modularising knowledge distillation into a framework of three components, i.e. affinity, normalisation, and loss, we give a unified treatment of these algorithms and study a number of previously unexplored combinations of the modules.
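As a rough illustration of the modular view described above, the sketch below factors a distillation objective into separate affinity, normalisation, and loss functions. The concrete choices here (identity affinity on raw logits, temperature-scaled softmax, KL-divergence loss with the classic T^2 scaling) are illustrative assumptions, not the paper's specific combinations.

```python
import numpy as np

def affinity(logits):
    # Affinity module: identity on raw logits (one illustrative choice;
    # the framework admits other affinity functions).
    return logits

def normalise(z, temperature=4.0):
    # Normalisation module: temperature-scaled softmax over the class axis.
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    # Loss module: KL(teacher || student) on the normalised outputs,
    # scaled by T^2 as in classic distillation (Hinton et al.).
    p_t = normalise(affinity(teacher_logits), temperature)
    p_s = normalise(affinity(student_logits), temperature)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return temperature**2 * kl.mean()

# Example: a batch of 8 samples with 10 classes.
rng = np.random.default_rng(0)
student = rng.normal(size=(8, 10))
teacher = rng.normal(size=(8, 10))
print(kd_loss(student, teacher) >= 0.0)   # KL divergence is non-negative
print(np.isclose(kd_loss(teacher, teacher), 0.0))  # zero when outputs match
```

Swapping any one module (e.g. a different affinity measure or normalisation) while holding the others fixed is exactly the kind of combination the unified treatment makes easy to enumerate.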
no code implementations • 2 Oct 2018 • Yang Zhong, Vladimir Li, Ryuzo Okada, Atsuto Maki
This paper presents an automatic network adaptation method that finds a ConvNet structure well-suited to a given target task, e.g. image classification, for both efficiency and accuracy in transfer learning.