Dual Discriminator Adversarial Distillation for Data-free Model Compression

12 Apr 2021 · Haoran Zhao, Xin Sun, Junyu Dong, Hui Yu, Huiyu Zhou

Knowledge distillation has been widely used to produce portable and efficient neural networks that can be deployed on edge devices for computer vision tasks. However, almost all top-performing knowledge distillation methods require access to the original training data, which is often very large and frequently unavailable...
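Since the abstract is truncated here, only the general setting it describes can be illustrated: distilling a teacher into a student without the original training data, using adversarially generated inputs instead. Below is a minimal PyTorch sketch of generic data-free adversarial distillation; the generator architecture, the L1 disagreement signal, and all hyperparameters are illustrative assumptions, not the paper's exact dual-discriminator (DDAD) formulation.

```python
# Minimal sketch of data-free adversarial distillation (assumptions, not the
# paper's exact method): a generator synthesizes inputs that maximize
# teacher/student disagreement, and the student then distills on them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps random noise to synthetic 32x32 RGB images (CIFAR-like, assumed)."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, 3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(64, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def distill_step(generator, teacher, student, opt_g, opt_s, z_dim=100, batch=64):
    """One alternating optimization step; teacher is frozen (teacher.eval())."""
    # 1) Generator step: synthesize images that MAXIMIZE teacher/student
    #    disagreement -- the adversarial signal that replaces real data.
    z = torch.randn(batch, z_dim)
    fake = generator(z)
    with torch.no_grad():
        t_logits = teacher(fake)
    s_logits = student(fake)
    disagreement = F.l1_loss(s_logits, t_logits)
    opt_g.zero_grad()
    (-disagreement).backward()  # generator ascends the discrepancy
    opt_g.step()

    # 2) Student step: on fresh samples, MINIMIZE the KL divergence to the
    #    teacher's soft predictions (standard distillation loss).
    z = torch.randn(batch, z_dim)
    fake = generator(z).detach()
    with torch.no_grad():
        t_logits = teacher(fake)
    s_logits = student(fake)
    kd_loss = F.kl_div(F.log_softmax(s_logits, dim=1),
                       F.softmax(t_logits, dim=1), reduction='batchmean')
    opt_s.zero_grad()
    kd_loss.backward()
    opt_s.step()
    return kd_loss.item()
```

Here `teacher` and `student` can be any classifiers with matching output dimensions; in practice a distillation temperature and more steps per phase are typical, omitted for brevity.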

PDF Abstract

No code implementations yet.



Methods used in the Paper

Batch Normalization
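Batch Normalization is the only method tagged on this page. One common role for BN in data-free distillation (an assumption here, not a claim about this paper's exact use) is as a data prior: the teacher's stored BN running statistics constrain synthetic images to produce realistic feature statistics. A hedged sketch:

```python
# Hedged sketch: BN-statistics matching as a regularizer on synthetic images,
# a common ingredient of data-free distillation. Whether this paper uses BN
# this way is an assumption; BN is simply listed among the methods above.
import torch
import torch.nn as nn

def bn_statistics_loss(teacher: nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between the batch statistics of `images`' features
    and each BN layer's running mean/variance recorded on real data.
    Assumes teacher.eval() so running statistics are not updated."""
    losses, hooks = [], []

    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]
            mean = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            losses.append(torch.norm(mean - bn.running_mean, 2)
                          + torch.norm(var - bn.running_var, 2))
        return hook

    for m in teacher.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(make_hook(m)))
    teacher(images)            # forward pass triggers the hooks
    for h in hooks:
        h.remove()
    return torch.stack(losses).sum()
```

This loss would typically be added to the generator's objective so that synthetic inputs stay close to the feature distribution the teacher saw during real training.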