Dataset Distillation - 1IPC

11 papers with code • 4 benchmarks • 3 datasets

Dataset distillation aims to compress a dataset into a much smaller synthetic one so that a model trained on the distilled dataset achieves high accuracy. Concretely, the 1-IPC setting is framed as maximizing classification accuracy on the real test distribution under a budget of one distilled image per class.
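
In symbols, with notation chosen here purely for illustration (S the synthetic set, T the real training set, f_theta a classifier, and l a classification loss), the 1-IPC problem can be written as a bi-level optimization:

```latex
% Bi-level objective for 1-IPC dataset distillation (notation is illustrative).
% \mathcal{S} holds exactly one image-label pair per class c = 1..C;
% \mathcal{T} is the real training set; \ell is the classification loss.
\begin{aligned}
\mathcal{S}^{*} &= \arg\min_{\mathcal{S}} \;
  \mathbb{E}_{(x,y)\sim\mathcal{T}}
  \big[\, \ell\big(f_{\theta^{*}(\mathcal{S})}(x),\, y\big) \,\big], \\
\text{s.t.}\quad \theta^{*}(\mathcal{S}) &= \arg\min_{\theta} \;
  \sum_{(s,c)\in\mathcal{S}} \ell\big(f_{\theta}(s),\, c\big),
  \qquad |\mathcal{S}| = C \;\; (\text{one image per class}).
\end{aligned}
```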

Most implemented papers

Dataset Condensation with Gradient Matching

VICO-UoE/DatasetCondensation ICLR 2021

As the state-of-the-art machine learning methods in many fields rely on larger datasets, storing datasets and training models on them become significantly more expensive.
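
The snippet above is the paper's motivation; the method itself condenses data by making the gradients a network computes on a synthetic batch mimic those it computes on a real batch. A minimal PyTorch-style sketch of such a matching loss (the function, `net`, and `criterion` names are illustrative, not the repository's API; the paper uses a layer-wise distance rather than this flat cosine form):

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(net, criterion, real_x, real_y, syn_x, syn_y):
    """Distance between gradients induced by a real batch and a synthetic batch.

    Simplified sketch of the gradient-matching objective: per-parameter
    gradients are compared with a cosine-style distance and summed.
    """
    real_loss = criterion(net(real_x), real_y)
    g_real = torch.autograd.grad(real_loss, net.parameters())
    g_real = [g.detach() for g in g_real]          # real gradients are fixed targets

    syn_loss = criterion(net(syn_x), syn_y)
    g_syn = torch.autograd.grad(syn_loss, net.parameters(), create_graph=True)

    dist = 0.0
    for gr, gs in zip(g_real, g_syn):
        dist = dist + (1.0 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0))
    return dist  # minimized with respect to syn_x, the synthetic images
```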

Dataset Distillation by Matching Training Trajectories

georgecazenavette/mtt-distillation CVPR 2022

To efficiently obtain the initial and target network parameters for large-scale datasets, we pre-compute and store training trajectories of expert networks trained on the real dataset.
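
The matching objective then compares a student trained for a few steps on the synthetic data against a later checkpoint of the stored expert trajectory. A hedged sketch of that loss on flattened parameter vectors (names are illustrative; in practice the student parameters come from a differentiable inner loop so the loss backpropagates to the synthetic images):

```python
import torch

def trajectory_matching_loss(student_params, expert_start, expert_target):
    """Normalized distance between student parameters (after a few inner steps
    on the synthetic set, starting from expert_start) and a later expert
    checkpoint. All arguments are flat 1-D parameter tensors.
    """
    num = torch.sum((student_params - expert_target) ** 2)
    den = torch.sum((expert_start - expert_target) ** 2) + 1e-12
    return num / den  # normalizing by the expert's own progress stabilizes scale
```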

Dataset Condensation with Distribution Matching

VICO-UoE/DatasetCondensation 8 Oct 2021

Computational cost of training state-of-the-art deep models in many learning problems is rapidly increasing due to more sophisticated models and larger datasets.
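
Distribution matching avoids inner-loop training altogether: it aligns feature statistics of real and synthetic images of the same class under randomly sampled embedding networks. A minimal sketch with a mean-embedding loss (`embed` is an assumed feature extractor, not the repository's API):

```python
import torch

def distribution_matching_loss(embed, real_x, syn_x):
    """Squared distance between mean feature embeddings of real and synthetic
    images of one class, under a (typically randomly initialized) network."""
    with torch.no_grad():
        real_feat = embed(real_x).mean(dim=0)   # real statistics are fixed targets
    syn_feat = embed(syn_x).mean(dim=0)         # gradients flow into syn_x
    return torch.sum((real_feat - syn_feat) ** 2)
```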

Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation

angusdujw/ftd-distillation CVPR 2023

To mitigate the adverse impact of this accumulated trajectory error, we propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.

Dataset Condensation with Differentiable Siamese Augmentation

VICO-UoE/DatasetCondensation 16 Feb 2021

In many machine learning problems, large-scale datasets have become the de-facto standard to train state-of-the-art deep networks at the price of heavy computation load.
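
The key mechanism is applying the same randomly sampled, differentiable augmentation to the real and the synthetic batch before matching, so gradients flow through the augmentation into the synthetic images. A rough sketch that ties the two augmentations together with a shared RNG seed (`augment` and `match_fn` are assumed placeholders, e.g. the gradient-matching loss above):

```python
import torch

def dsa_step(net, criterion, augment, real_x, real_y, syn_x, syn_y, match_fn):
    """Siamese augmentation sketch: sample ONE augmentation and apply it to both
    batches, then compute the usual matching loss on the augmented views."""
    seed = torch.randint(0, 2**31 - 1, (1,)).item()
    torch.manual_seed(seed)
    real_aug = augment(real_x)
    torch.manual_seed(seed)                 # identical random parameters
    syn_aug = augment(syn_x)
    return match_fn(net, criterion, real_aug, real_y, syn_aug, syn_y)
```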

Dataset Distillation using Neural Feature Regression

yongchao97/FRePo 1 Jun 2022

Dataset distillation can be formulated as a bi-level meta-learning problem where the outer loop optimizes the meta-dataset and the inner loop trains a model on the distilled data.
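
A minimal sketch of that generic bi-level loop, using differentiable unrolled SGD via `torch.func.functional_call` (PyTorch 2.x); this illustrates the plain formulation, not FRePo's faster neural-feature-regression inner solver, and all helper names are illustrative:

```python
import torch
from torch.func import functional_call

def bilevel_distillation_loss(net, criterion, syn_x, syn_y, real_x, real_y,
                              inner_steps=10, inner_lr=0.01):
    """Outer-loop loss for the synthetic set: unroll a short inner training run
    on the synthetic data with differentiable SGD, then evaluate on real data.
    Gradients of the returned loss w.r.t. syn_x / syn_y drive the outer update."""
    # start from the model's current parameters as differentiable tensors
    params = {k: v.detach().clone().requires_grad_(True)
              for k, v in net.named_parameters()}
    for _ in range(inner_steps):
        inner_loss = criterion(functional_call(net, params, (syn_x,)), syn_y)
        grads = torch.autograd.grad(inner_loss, tuple(params.values()),
                                    create_graph=True)
        params = {k: p - inner_lr * g
                  for (k, p), g in zip(params.items(), grads)}
    # outer objective: performance of the inner-trained model on real data
    return criterion(functional_call(net, params, (real_x,)), real_y)
```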

Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks

princetonvisualai/rememberthepast-datasetdistillation 6 Jun 2022

We propose an algorithm that compresses the critical information of a large dataset into compact addressable memories.
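
One way to picture this: distilled images are not stored directly but synthesized on demand as learned linear combinations of a shared basis, addressed per class. A hypothetical sketch of such a parameterization (an illustration of the idea, not the repository's code):

```python
import torch

class AddressableMemory(torch.nn.Module):
    """Shared memory bank sketch: each distilled image is a learned linear
    combination of shared basis 'memories', addressed by per-class coefficients."""
    def __init__(self, num_classes, num_bases, image_shape):
        super().__init__()
        self.bases = torch.nn.Parameter(torch.randn(num_bases, *image_shape))
        self.addresses = torch.nn.Parameter(torch.randn(num_classes, num_bases))

    def forward(self):
        # (C, K) @ (K, D) -> one synthesized image per class (1-IPC setting)
        flat = self.bases.flatten(1)
        images = self.addresses @ flat
        return images.view(self.addresses.shape[0], *self.bases.shape[1:])
```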

Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory

justincui03/tesla 19 Nov 2022

The resulting algorithm sets new SOTA on ImageNet-1K: we can scale up to 50 IPCs (Images Per Class) on ImageNet-1K on a single GPU (all previous methods can only scale to 2 IPCs on ImageNet-1K), leading to the best accuracy (only 5.9% accuracy drop against full dataset training) while utilizing only 4.2% of the number of data points - an 18.2% absolute gain over prior SOTA.

Dataset Distillation with Convexified Implicit Gradients

yolky/rcig 13 Feb 2023

We propose a new dataset distillation algorithm using reparameterization and convexification of implicit gradients (RCIG), that substantially improves the state-of-the-art.

Embarrassingly Simple Dataset Distillation

fengyzpku/simple_dataset_distillation 13 Nov 2023

Re-examining the foundational back-propagation through time method, we study the pronounced variance in the gradients, computational burden, and long-term dependencies.