2 code implementations • 11 Apr 2024 • Muxin Zhou, Zeyuan Yin, Shitong Shao, Zhiqiang Shen
In this work, we address this task through the new lens of model informativeness in the compression stage, i.e., during pretraining on the original dataset.
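As a rough illustration of how a pretrained model can act as the compressed carrier of dataset information, here is a minimal sketch assuming PyTorch/torchvision; the backbone choice and helper name are illustrative assumptions, not the authors' code.

```python
# Minimal sketch, assuming PyTorch/torchvision: after the compression
# (pretraining) stage, BatchNorm running statistics serve as a compact
# summary of the original data distribution. Illustrative only.
import torch
import torchvision

teacher = torchvision.models.resnet18(num_classes=10)  # hypothetical backbone
# ... standard pretraining on the original dataset would happen here ...

def bn_statistics(model):
    """Collect (running_mean, running_var) from every BatchNorm2d layer."""
    return [(m.running_mean.clone(), m.running_var.clone())
            for m in model.modules()
            if isinstance(m, torch.nn.BatchNorm2d)]

compressed = bn_statistics(teacher)
print(f"{len(compressed)} BN layers summarize the pretraining data")
```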
1 code implementation • 30 Nov 2023 • Zeyuan Yin, Zhiqiang Shen
Dataset distillation aims to generate a smaller but representative subset from a large dataset, allowing a model to be trained efficiently while still achieving decent performance when evaluated on the original test data distribution.
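To make the train-on-synthetic, test-on-real protocol concrete, the sketch below shows the evaluation loop in PyTorch; `synthetic_images`, `synthetic_labels`, and `test_loader` are hypothetical placeholders, and the training loop is deliberately bare.

```python
# Minimal sketch of the standard dataset-distillation protocol: train on a
# small synthetic set, then evaluate on the *original* test distribution.
import torch
import torch.nn.functional as F

def train_on_distilled(model, synthetic_images, synthetic_labels,
                       epochs=100, lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(synthetic_images), synthetic_labels)
        loss.backward()
        opt.step()
    return model

@torch.no_grad()
def evaluate(model, test_loader):
    """Accuracy on the original (real) test set."""
    model.eval()
    correct = total = 0
    for x, y in test_loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```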
1 code implementation • 29 Nov 2023 • Shitong Shao, Zeyuan Yin, Muxin Zhou, Xindong Zhang, Zhiqiang Shen
We call this perspective "generalized matching" and, in this work, propose Generalized Various Backbone and Statistical Matching (G-VBSM), which aims to create a dense, information-rich synthetic dataset that remains consistent with the complete dataset across various backbones, layers, and statistics.
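A minimal sketch of the statistical-matching idea, assuming PyTorch forward hooks; the two backbones and the mean/variance statistics used here are illustrative choices, not the exact G-VBSM recipe.

```python
# Minimal sketch of "generalized matching": penalize the gap between the
# synthetic batch's per-layer feature statistics and the full-dataset
# statistics, summed over several backbones. Illustrative only.
import torch
import torchvision

backbones = [torchvision.models.resnet18(), torchvision.models.mobilenet_v2()]

def layer_stats(model, x):
    """Per-BN-layer (mean, var) of the input features, captured via hooks."""
    model.eval()
    stats, hooks = [], []
    for m in model.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(
                lambda mod, inp, out, s=stats: s.append(
                    (inp[0].mean(dim=(0, 2, 3)), inp[0].var(dim=(0, 2, 3))))))
    model(x)
    for h in hooks:
        h.remove()
    return stats

def matching_loss(synthetic, full_stats_per_backbone):
    """Sum of squared statistic gaps over backbones and layers."""
    loss = 0.0
    for model, full_stats in zip(backbones, full_stats_per_backbone):
        for (m_s, v_s), (m_f, v_f) in zip(layer_stats(model, synthetic),
                                          full_stats):
            loss = loss + (m_s - m_f).pow(2).mean() + (v_s - v_f).pow(2).mean()
    return loss
```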
no code implementations • 28 Nov 2023 • Xiaosen Wang, Zeyuan Yin
In this work, we posit that adversarial examples located at the convergence of decision boundaries across various categories exhibit better transferability, and we identify that Admix tends to steer adversarial examples toward such regions.
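A minimal sketch of the Admix-style gradient referenced above, assuming PyTorch; the mixing strength `eta` and the number of scale copies follow the commonly used Admix setup but are illustrative here.

```python
# Minimal sketch of the Admix gradient: average input-gradients over copies
# of x mixed with images from other categories (and downscaled), which tends
# to pull the attack toward regions shared by several decision boundaries.
import torch
import torch.nn.functional as F

def admix_gradient(model, x, y, other_images, eta=0.2, num_scales=3):
    """Average gradient w.r.t. x over admixed and scaled copies."""
    x = x.clone().detach().requires_grad_(True)
    total = 0.0
    for x_other in other_images:          # samples drawn from other classes
        for i in range(num_scales):       # scale-invariance style copies
            mixed = (x + eta * x_other) / (2 ** i)
            total = total + F.cross_entropy(model(mixed), y)
    grad = torch.autograd.grad(total, x)[0]
    return grad / (len(other_images) * num_scales)
```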
1 code implementation • NeurIPS 2023 • Zeyuan Yin, Eric Xing, Zhiqiang Shen
The proposed method demonstrates flexibility across diverse dataset scales and offers several advantages: it synthesizes images at arbitrary resolutions, keeps training cost and memory consumption low even for high-resolution synthesis, and scales up to arbitrary evaluation network architectures.
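Here is a minimal sketch of a recover-style synthesis step under the same squeeze-stage assumptions as the BN-statistics sketch above: the synthetic pixels themselves are the optimization variables, so the resolution is a free choice; `bn_matching_loss` is a hypothetical placeholder for the statistic-alignment term, not the authors' API.

```python
# Minimal sketch of one recover step, assuming PyTorch. The pixels are the
# parameters being optimized, so any resolution the teacher accepts works.
import torch
import torch.nn.functional as F

def recover_step(teacher, syn_images, syn_labels, optimizer,
                 bn_matching_loss, alpha=1.0):
    teacher.eval()
    optimizer.zero_grad()
    loss = F.cross_entropy(teacher(syn_images), syn_labels)
    loss = loss + alpha * bn_matching_loss(teacher, syn_images)
    loss.backward()
    optimizer.step()

# High-resolution example: 10 synthetic 224x224 images, one per class.
syn_images = torch.randn(10, 3, 224, 224, requires_grad=True)
syn_labels = torch.arange(10)
optimizer = torch.optim.Adam([syn_images], lr=0.1)
```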
1 code implementation • 22 Sep 2021 • Zeyuan Yin, Ye Yuan, Panfeng Guo, Pan Zhou
Edge devices in federated learning usually have far more limited computation and communication resources than servers in a data center.