Search Results for author: Kanya Mo

Found 2 papers, 1 paper with code

Up to 100× Faster Data-free Knowledge Distillation

2 code implementations • 12 Dec 2021 • Gongfan Fang, Kanya Mo, Xinchao Wang, Jie Song, Shitao Bei, Haofei Zhang, Mingli Song

At the heart of our approach is a novel strategy to reuse the shared common features in training data so as to synthesize different data instances.

Data-free Knowledge Distillation
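
The snippet above hinges on reusing common features so that each new synthetic instance is cheap to produce. Below is a minimal, hypothetical PyTorch sketch of that general idea, assuming a frozen shared trunk whose output is computed once and cached while only a lightweight per-batch head is optimized against a given teacher network; every module name, shape, and the toy objective are illustrative assumptions, not the paper's actual method.

```python
from math import prod

import torch
import torch.nn as nn

class SharedTrunk(nn.Module):
    """Expensive stage producing 'common features'; learned once, then reused."""
    def __init__(self, z_dim: int = 128, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

class InstanceHead(nn.Module):
    """Cheap stage, re-initialized per batch to diversify synthetic instances."""
    def __init__(self, feat_dim: int = 256, out_shape=(3, 32, 32)):
        super().__init__()
        self.out_shape = out_shape
        self.net = nn.Linear(feat_dim, prod(out_shape))

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.net(f)).view(-1, *self.out_shape)

def synthesize_batch(trunk: SharedTrunk, teacher: nn.Module,
                     batch_size: int = 64, steps: int = 20) -> torch.Tensor:
    """Reuse cached common features; optimize only the lightweight head."""
    with torch.no_grad():
        f = trunk(torch.randn(batch_size, 128))  # computed once, reused every step
    head = InstanceHead()
    opt = torch.optim.Adam(head.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        x = head(f)                              # cheap per-step work only
        logits = teacher(x)
        loss = -logits.max(dim=1).values.mean()  # toy objective: confident teacher
        loss.backward()
        opt.step()
    return x.detach()
```

Since the trunk's forward pass happens once per batch rather than once per optimization step, the per-instance cost is dominated by the small head, which is the rough intuition behind the reported speed-up.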

Exploiting Spline Models for the Training of Fully Connected Layers in Neural Network

no code implementations • 12 Feb 2021 • Kanya Mo, Shen Zheng, Xiwei Wang, Jinghua Wang, Klaus-Dieter Schewe

The fully connected (FC) layer, one of the most fundamental modules in artificial neural networks (ANNs), is often considered difficult and inefficient to train, in part due to the risk of overfitting caused by its large number of parameters.
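
The "large number of parameters" point is easy to make concrete: a single FC layer holds one weight per input-output pair plus one bias per output unit. The sketch below uses arbitrary illustrative sizes (a flattened 224×224×3 input mapped to 4096 units), which are assumptions for the example, not figures from the paper.

```python
def fc_param_count(in_features: int, out_features: int) -> int:
    """Parameters in one fully connected layer: a weight per
    input-output pair plus one bias per output unit."""
    return in_features * out_features + out_features

# A flattened 224x224 RGB image into 4096 units (illustrative sizes):
print(fc_param_count(224 * 224 * 3, 4096))  # 616,566,784 parameters
```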
