Data-free Knowledge Distillation

25 papers with code • 0 benchmarks • 0 datasets

Data-free knowledge distillation transfers the knowledge of a trained teacher network to a student network without access to the original training data, typically by synthesizing substitute inputs (for example via model inversion or a learned generator) to drive the distillation.

Most implemented papers

Data-Free Knowledge Distillation for Heterogeneous Federated Learning

zhuangdizhu/FedGen 20 May 2021

Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global server iteratively averages the model parameters of local users without accessing their data.
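
The parameter-averaging step described above is the classic FedAvg aggregation. A minimal PyTorch-style sketch is shown below; the weighting by local dataset size is an illustrative assumption, and FedGen's server-side generator is not shown.

```python
import torch

def federated_average(client_state_dicts, client_sizes):
    """Average client model parameters, weighted by local dataset size.

    Illustrative FedAvg-style aggregation; FedGen additionally learns a
    lightweight server-side generator, which is omitted here.
    """
    total = float(sum(client_sizes))
    avg = {}
    for name in client_state_dicts[0]:
        avg[name] = sum(
            sd[name].float() * (n / total)
            for sd, n in zip(client_state_dicts, client_sizes)
        )
    return avg

# usage: global_model.load_state_dict(federated_average(states, sizes))
```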

Contrastive Model Inversion for Data-Free Knowledge Distillation

zju-vipa/DataFree 18 May 2021

In this paper, we propose Contrastive Model Inversion (CMI), where the data diversity is explicitly modeled as an optimizable objective, to alleviate the mode collapse issue.
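
One way to make diversity "an optimizable objective" is a contrastive penalty on the features of the synthesized batch, so that inverted samples are pushed apart in embedding space. The sketch below is a generic InfoNCE-flavoured diversity loss that only approximates the spirit of CMI; the encoder, temperature, and weighting are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def diversity_loss(features, temperature=0.1):
    """Penalize similarity between features of different synthesized samples.

    features: (N, D) embeddings of N synthesized images.
    Minimizing this pushes samples apart, counteracting mode collapse.
    """
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                 # (N, N) cosine similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    off_diag = sim.masked_fill(mask, float("-inf"))
    return torch.logsumexp(off_diag, dim=1).mean()

# synthesized inputs x are then optimized to minimize
#   inversion_loss(teacher(x)) + lambda * diversity_loss(encoder(x))
```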

Data-Free Knowledge Distillation for Deep Neural Networks

huawei-noah/DAFL 19 Oct 2017

Recent advances in model compression have provided procedures for compressing large neural networks to a fraction of their original size while retaining most, if not all, of their accuracy.

Up to 100× Faster Data-free Knowledge Distillation

zju-vipa/Fast-Datafree 12 Dec 2021

At the heart of our approach is a novel strategy to reuse the shared common features in training data so as to synthesize different data instances.
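
The "reuse" idea can be pictured as keeping one generator alive across synthesis rounds and only briefly adapting it per batch, instead of inverting every batch from scratch. The function below is a hedged sketch under that reading; the generator, teacher, and inversion loss are caller-supplied assumptions rather than the paper's meta-learned synthesizer.

```python
import torch

def synthesize_batch(generator, teacher, inversion_loss, gen_opt,
                     batch_size=64, adapt_steps=5, z_dim=128, device="cpu"):
    """Reuse a persistent generator: a few quick adaptation steps per batch,
    so low-level features learned in earlier rounds are shared by new
    synthetic instances instead of being re-learned from scratch."""
    generator.train()
    for _ in range(adapt_steps):
        z = torch.randn(batch_size, z_dim, device=device)
        loss = inversion_loss(teacher(generator(z)))  # e.g. class prior + BN-statistics match
        gen_opt.zero_grad()
        loss.backward()
        gen_opt.step()
    with torch.no_grad():
        return generator(torch.randn(batch_size, z_dim, device=device))
```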

DAD++: Improved Data-free Test Time Adversarial Defense

vcl-iisc/data-free-defense-at-test-time 10 Sep 2023

With the increasing deployment of deep neural networks in safety-critical applications such as self-driving cars, medical imaging, and anomaly detection, adversarial robustness has become a crucial concern for the reliability of these networks in real-world scenarios.

Knowledge Extraction with No Observable Data

snudatalab/KegNet NeurIPS 2019

Knowledge distillation transfers the knowledge of a large neural network into a smaller one, and has been shown to be effective especially when the amount of training data is limited or the student model is very small.
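
For reference, the transfer being described is usually implemented as a temperature-softened KL term between teacher and student outputs plus the ordinary supervised loss. The sketch below is the generic Hinton-style objective, not KegNet's data-free procedure (which must first generate the inputs it distills on).

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Hinton-style knowledge distillation loss: softened KL to the teacher
    plus cross-entropy on hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```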

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation

sanjaykariyappa/maze CVPR 2021

The effectiveness of such model stealing attacks relies heavily on the availability of data necessary to query the target model.
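
The zeroth-order gradient estimation named in the title replaces backpropagation through the black-box target with finite differences over random directions, since only the target's outputs can be queried. A compact sketch is below; the loss function and number of directions are illustrative assumptions.

```python
import torch

def zeroth_order_grad(loss_fn, x, num_dirs=10, eps=1e-3):
    """Estimate d loss_fn(x) / d x using forward queries only.

    loss_fn: black-box scalar function (e.g. a disagreement loss that only
             queries the victim model's predictions on its input).
    """
    grad = torch.zeros_like(x)
    base = loss_fn(x)
    for _ in range(num_dirs):
        u = torch.randn_like(x)
        u = u / (u.norm() + 1e-12)
        grad += (loss_fn(x + eps * u) - base) / eps * u
    return grad / num_dirs

# usage: g = zeroth_order_grad(lambda q: disagreement(victim(q), clone(q)), queries)
```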

Robustness and Diversity Seeking Data-Free Knowledge Distillation

PengchaoHan/RDSKD 7 Nov 2020

Knowledge distillation (KD) has enabled remarkable progress in model compression and knowledge transfer.

Training Generative Adversarial Networks in One Stage

zju-vipa/OSGAN CVPR 2021

Based on the adversarial losses of the generator and discriminator, we categorize GANs into two classes, Symmetric GANs and Asymmetric GANs, and introduce a novel gradient decomposition method to unify the two, allowing us to train both classes in one stage and hence alleviate the training effort.
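
For the symmetric case, where the generator's loss is exactly the negative of the discriminator's, one-stage training can be pictured as a single backward pass whose generator gradients are simply sign-flipped before the update. The toy sketch below illustrates only that special case under this assumption; it is not the paper's gradient-decomposition algorithm, which also handles the asymmetric case.

```python
import torch
import torch.nn.functional as F

def one_stage_step(G, D, real, z, opt_g, opt_d):
    """One backward pass for a symmetric (minimax) GAN where L_G = -L_D."""
    fake = G(z)                                   # not detached: grads must reach G
    real_logits, fake_logits = D(real), D(fake)
    d_loss = (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    )
    opt_d.zero_grad()
    opt_g.zero_grad()
    d_loss.backward()                             # populates grads of D and G together
    for p in G.parameters():
        if p.grad is not None:
            p.grad.neg_()                         # G minimizes -L_D, so flip its gradients
    opt_d.step()
    opt_g.step()
```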

Towards Data-Free Domain Generalization

HaokunChen245/DFDG 9 Oct 2021

In particular, we address the question: How can knowledge contained in models trained on different source domains be merged into a single model that generalizes well to unseen target domains, in the absence of source and target domain data?
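
A simple baseline answer to that question, once some proxy or synthesized inputs are available, is multi-teacher distillation: the single student matches the averaged softened predictions of the per-domain teachers. The loss below is a hedged sketch of that baseline; DFDG's actual data-free procedure is more involved.

```python
import torch
import torch.nn.functional as F

def multi_teacher_distill_loss(student_logits, teacher_logits_list, temperature=2.0):
    """KL divergence from the student to the average of several
    source-domain teachers' softened predictions."""
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=1) for t in teacher_logits_list]
    ).mean(dim=0)
    return F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        teacher_probs,
        reduction="batchmean",
    ) * (temperature ** 2)
```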