Data-free Knowledge Distillation

37 papers with code • 2 benchmarks • 3 datasets

Data-free knowledge distillation transfers knowledge from a trained teacher network to a smaller student network without access to the teacher's original training data, typically by synthesizing surrogate inputs from the teacher itself.

Most implemented papers

Data-Free Knowledge Distillation for Heterogeneous Federated Learning

zhuangdizhu/FedGen 20 May 2021

Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global server iteratively averages the model parameters of local users without accessing their data.
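
The server-side averaging step described above can be sketched in a few lines of plain PyTorch. This is a generic FedAvg-style sketch under assumed inputs (a list of client state dicts and optional per-client weights), not FedGen's actual implementation:

```python
import copy
import torch

def federated_average(client_state_dicts, client_weights=None):
    """Average client model parameters on the server (FedAvg-style step).

    client_state_dicts: list of state_dicts collected from local users.
    client_weights: optional per-client weights, e.g. local dataset sizes.
    """
    if client_weights is None:
        client_weights = [1.0] * len(client_state_dicts)
    total = float(sum(client_weights))

    averaged = copy.deepcopy(client_state_dicts[0])
    for key in averaged:
        # Weighted average of each parameter tensor across clients;
        # the raw data never leaves the clients, only the parameters do.
        averaged[key] = sum(
            w * sd[key].float() for w, sd in zip(client_weights, client_state_dicts)
        ) / total
    return averaged
```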

ZeroGen: Efficient Zero-shot Learning via Dataset Generation

HKUNLP/zerogen 16 Feb 2022

Interest in dataset generation has grown recently due to the superior generative capacity of large pre-trained language models (PLMs).

Data-Free Knowledge Distillation for Deep Neural Networks

huawei-noah/DAFL 19 Oct 2017

Recent advances in model compression have provided procedures for compressing large neural networks to a fraction of their original size while retaining most, if not all, of their accuracy.

Contrastive Model Inversion for Data-Free Knowledge Distillation

zju-vipa/DataFree 18 May 2021

In this paper, we propose Contrastive Model Inversion (CMI), where the data diversity is explicitly modeled as an optimizable objective, to alleviate the mode collapse issue.
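
A minimal sketch of "diversity as an optimizable objective", assuming the embeddings come from the teacher's penultimate layer: a generic contrastive-style penalty that pushes synthesized samples apart in feature space. This illustrates the idea only and is not the exact CMI loss:

```python
import torch
import torch.nn.functional as F

def diversity_loss(embeddings, temperature=0.1):
    """Contrastive-style diversity term over a batch of synthesized samples.

    embeddings: (N, D) features of the generated images. Minimizing this
    term discourages samples from collapsing onto the same modes.
    """
    z = F.normalize(embeddings, dim=1)            # unit-norm features
    sim = z @ z.t() / temperature                 # pairwise similarities
    mask = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    # For each sample, penalize its similarity to every other sample.
    return torch.logsumexp(sim[mask].view(len(z), -1), dim=1).mean()
```

Added to the usual model-inversion objective, a term like this rewards batches whose members look different to the feature extractor.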

Up to 100× Faster Data-free Knowledge Distillation

zju-vipa/Fast-Datafree 12 Dec 2021

At the heart of our approach is a novel strategy to reuse the shared common features in training data so as to synthesize different data instances.
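
One way to picture the feature-reuse idea, as a hedged sketch: keep a single generator alive across synthesis rounds so that its weights store the features shared by all batches, and only optimize fresh latent codes (plus a light generator update) for each new batch. The generator, teacher, latent_dim attribute, and the confidence-style inversion loss below are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def synthesize_batch(generator, teacher, batch_size=64, steps=20, lr=0.1):
    """Synthesize one batch of pseudo-data while reusing the generator.

    The generator persists across calls, so each batch starts from already
    learned common features and needs only a few optimization steps instead
    of a full from-scratch inversion.
    """
    z = torch.randn(batch_size, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z] + list(generator.parameters()), lr=lr)
    for _ in range(steps):
        x = generator(z)
        logits = teacher(x)
        # Stand-in inversion objective: make the (frozen) teacher confident
        # on its own predictions for the synthesized inputs.
        loss = F.cross_entropy(logits, logits.argmax(dim=1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```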

ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback

hkunlp/progen 22 Oct 2022

To improve the quality of dataset synthesis, we propose a progressive zero-shot dataset generation framework, ProGen, which leverages the feedback from the task-specific model to guide the generation of new training data via in-context examples.
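
The feedback loop can be summarized as a short skeleton, where generate_fn, train_fn, and score_fn are placeholders for the PLM prompt, the task-specific model, and the quality signal used as feedback (all assumptions, not ProGen's actual interfaces):

```python
def progressive_generation(generate_fn, train_fn, score_fn,
                           rounds=3, per_round=200, k_feedback=8):
    """Progressive zero-shot dataset generation with in-context feedback.

    generate_fn(in_context) -> list of (text, label) pairs from the PLM
    train_fn(dataset)       -> a task-specific model trained on the data
    score_fn(model, ex)     -> quality score of one synthesized example
    """
    dataset, in_context = [], []
    for _ in range(rounds):
        dataset.extend(generate_fn(in_context)[:per_round])
        model = train_fn(dataset)
        # Feed the examples the task model rates highest back into the
        # prompt as in-context demonstrations for the next round.
        ranked = sorted(dataset, key=lambda ex: score_fn(model, ex), reverse=True)
        in_context = ranked[:k_feedback]
    return dataset
```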

DAD++: Improved Data-free Test Time Adversarial Defense

vcl-iisc/data-free-defense-at-test-time 10 Sep 2023

With the increasing deployment of deep neural networks in safety-critical applications such as self-driving cars, medical imaging, and anomaly detection, adversarial robustness has become a crucial concern for the reliability of these networks in real-world scenarios.

Knowledge Extraction with No Observable Data

snudatalab/KegNet NeurIPS 2019

Knowledge distillation transfers the knowledge of a large neural network into a smaller one and has been shown to be effective, especially when the amount of training data is limited or the student model is very small.
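
For reference, the transfer step itself is usually the standard soft-label distillation loss of Hinton et al.; the sketch below shows only that loss, not KegNet's data-generation component:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Soft-label knowledge distillation loss.

    The student matches the teacher's softened output distribution; the
    T^2 factor keeps gradient magnitudes comparable across temperatures.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```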

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation

sanjaykariyappa/maze CVPR 2021

The effectiveness of such attacks relies heavily on the availability of data necessary to query the target model.
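
The zeroth-order estimation named in the title can be illustrated with a generic random-direction finite-difference estimator, useful when the victim model only exposes outputs and no gradients. This is a textbook version under assumed black-box access, not the paper's exact estimator:

```python
import torch

def zeroth_order_gradient(f, x, num_directions=10, sigma=1e-3):
    """Estimate the gradient of a black-box scalar function f at x.

    Queries f along random directions and uses finite differences, so no
    backpropagation through the victim model is required.
    """
    grad = torch.zeros_like(x)
    fx = f(x)
    for _ in range(num_directions):
        u = torch.randn_like(x)
        grad += (f(x + sigma * u) - fx) / sigma * u
    return grad / num_directions
```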

Robustness and Diversity Seeking Data-Free Knowledge Distillation

PengchaoHan/RDSKD 7 Nov 2020

Knowledge distillation (KD) has enabled remarkable progress in model compression and knowledge transfer.