Search Results for author: Zeyuan Yin

Found 6 papers, 5 papers with code

Self-supervised Dataset Distillation: A Good Compression Is All You Need

2 code implementations • 11 Apr 2024 • Muxin Zhou, Zeyuan Yin, Shitong Shao, Zhiqiang Shen

In this work, we address this task through the new lens of model informativeness in the compression stage, i.e., during pretraining on the original dataset.

Informativeness
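
The "compression stage" here is the pretraining of a backbone on the original dataset before any images are synthesized; the paper's point is that a self-supervised objective can make that compressed model more informative than plain supervised training. As a rough, hypothetical sketch of such a squeeze step (not the authors' code), a SimCLR-style contrastive loss could stand in for the self-supervised objective:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style contrastive loss between two augmented views; a
    hypothetical stand-in for the self-supervised 'squeeze' objective."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2N, d) unit vectors
    sim = (z @ z.t()) / tau                          # scaled cosine similarity
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = (torch.arange(2 * n) + n) % (2 * n)    # positive = the other view
    return F.cross_entropy(sim, targets)

# squeeze sketch: pretrain the backbone on the *original* dataset, e.g.
# loss = nt_xent(backbone(aug(x)), backbone(aug(x)))
```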

Dataset Distillation in Large Data Era

1 code implementation • 30 Nov 2023 • Zeyuan Yin, Zhiqiang Shen

Dataset distillation aims to generate a smaller but representative subset from a large dataset, which allows a model to be trained efficiently while still achieving decent performance when evaluated on the original test data distribution.

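For context, dataset distillation is classically posed as a bilevel problem: an inner loop fits a model to the small synthetic set, and an outer loop updates the synthetic images so the fitted model performs well on real data. The sketch below unrolls a single inner SGD step on a toy linear classifier so gradients reach the images; `x_real` and `y_real` are assumed given, and this paper's actual method deliberately decouples the two levels to scale to large data:

```python
import torch
import torch.nn.functional as F

# learnable synthetic set (toy sizes, hypothetical): 10 images, 10 classes
x_syn = torch.randn(10, 784, requires_grad=True)
y_syn = torch.arange(10)
outer_opt = torch.optim.Adam([x_syn], lr=0.1)

def forward(w, b, x):                 # tiny linear classifier
    return x @ w + b

for step in range(200):
    # inner: one SGD step on the synthetic set, kept inside the autograd graph
    w = torch.zeros(784, 10, requires_grad=True)
    b = torch.zeros(10, requires_grad=True)
    inner_loss = F.cross_entropy(forward(w, b, x_syn), y_syn)
    gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
    w1, b1 = w - 0.5 * gw, b - 0.5 * gb
    # outer: update the images so the fitted model does well on real data
    outer_loss = F.cross_entropy(forward(w1, b1, x_real), y_real)
    outer_opt.zero_grad()
    outer_loss.backward()             # gradients flow back into x_syn
    outer_opt.step()
```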

Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching

1 code implementation • 29 Nov 2023 • Shitong Shao, Zeyuan Yin, Muxin Zhou, Xindong Zhang, Zhiqiang Shen

We call this perspective "generalized matching" and propose Generalized Various Backbone and Statistical Matching (G-VBSM) in this work, which aims to create a dense synthetic dataset that remains consistent with the complete dataset across various backbones, layers, and statistics.

Dataset Condensation
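
Concretely, one common form of statistical matching optimizes the synthetic images so their per-channel batch statistics agree with the running statistics stored in the BatchNorm layers of a pretrained backbone; summing this loss over several different backbones gives the "various backbone" part. A rough sketch, not the official G-VBSM code (which also matches further layers and statistics):

```python
import torch
import torch.nn as nn

def bn_matching_loss(backbone, x_syn):
    """Match batch statistics of synthetic images to the running
    statistics stored in a pretrained backbone's BatchNorm layers."""
    losses, hooks = [], []

    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]
            mu = x.mean(dim=(0, 2, 3))                   # per-channel batch mean
            var = x.var(dim=(0, 2, 3), unbiased=False)   # per-channel batch variance
            losses.append(((mu - bn.running_mean) ** 2).sum()
                          + ((var - bn.running_var) ** 2).sum())
        return hook

    for m in backbone.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(make_hook(m)))
    backbone.eval()                  # keep the stored running stats fixed
    backbone(x_syn)                  # the hooks populate `losses`
    for h in hooks:
        h.remove()
    return torch.stack(losses).sum()

# "various backbones": sum the loss over several pretrained models, e.g.
# total = sum(bn_matching_loss(b, x_syn) for b in (resnet18, mobilenet_v2))
```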

Rethinking Mixup for Improving the Adversarial Transferability

no code implementations • 28 Nov 2023 • Xiaosen Wang, Zeyuan Yin

In this work, we posit that adversarial examples located at the convergence of decision boundaries across various categories exhibit better transferability, and we identify that Admix tends to steer adversarial examples toward such regions.
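
For reference, Admix (the baseline this paper analyzes) computes the attack gradient as an average over copies of the input that each mix in a small fraction of an image from another category and are then downscaled. A sketch using the Admix paper's default values; `pool` (images from other categories) and `loss_fn` are assumed given:

```python
import torch

def admix_grad(model, loss_fn, x, y, pool, m1=5, m2=3, eta=0.2):
    """Average gradient over Admix-transformed copies of x: each copy mixes
    in eta * (an image from another category), then is scaled by 1/2^i."""
    grad = torch.zeros_like(x)
    for _ in range(m2):
        idx = torch.randint(len(pool), (x.size(0),))
        x_mix = x + eta * pool[idx]            # admix a sampled image
        for i in range(m1):
            x_adm = (x_mix / 2 ** i).clone().detach().requires_grad_(True)
            loss_fn(model(x_adm), y).backward()
            grad += x_adm.grad
    return grad / (m1 * m2)
```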

Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective

1 code implementation • NeurIPS 2023 • Zeyuan Yin, Eric Xing, Zhiqiang Shen

The proposed method is flexible across diverse dataset scales and offers several advantages: arbitrary resolutions of the synthesized images, low training cost and memory consumption even with high-resolution synthesis, and the ability to scale up to arbitrary evaluation network architectures.

Bilevel Optimization +1
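
The pipeline decouples into three stages: squeeze (train a teacher on the full dataset), recover (synthesize images from the frozen teacher), and relabel (store the teacher's soft predictions as targets). A minimal sketch of the recover and relabel steps, assuming a pretrained `teacher` and reusing the BN-matching helper sketched above; the loss weight, learning rate, and step count are illustrative:

```python
import torch
import torch.nn.functional as F

# recover: optimize synthetic images against the frozen teacher
x_syn = torch.randn(10, 3, 224, 224, requires_grad=True)  # arbitrary resolution
y_syn = torch.arange(10)                                  # one target class each
opt = torch.optim.Adam([x_syn], lr=0.25)

for _ in range(1000):
    loss = (F.cross_entropy(teacher(x_syn), y_syn)        # hit the target class
            + 0.01 * bn_matching_loss(teacher, x_syn))    # match BN statistics
    opt.zero_grad()
    loss.backward()
    opt.step()

# relabel: the teacher's soft predictions become the training targets
with torch.no_grad():
    soft_labels = F.softmax(teacher(x_syn), dim=1)
```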

Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis

1 code implementation • 22 Sep 2021 • Zeyuan Yin, Ye Yuan, Panfeng Guo, Pan Zhou

Edge devices in federated learning usually have much more limited computation and communication resources compared to servers in a data center.

Backdoor Attack • Federated Learning +1
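
The lottery ticket angle: edge clients train a sparse subnetwork selected by magnitude pruning rather than the full model, which cuts computation and communication; the paper then studies how backdoors interact with this setup. A minimal sketch of the pruning mask, assuming a standard PyTorch model; the sparsity level and aggregation scheme are illustrative:

```python
import torch

def magnitude_mask(model, sparsity=0.8):
    """Global magnitude pruning: keep the largest-|w| weights, a lottery
    ticket-style mask that lets edge devices train a sparse subnetwork."""
    scores = torch.cat([p.abs().flatten() for p in model.parameters()])
    threshold = scores.kthvalue(int(sparsity * scores.numel())).values
    return [(p.abs() > threshold).float() for p in model.parameters()]

def apply_mask(model, mask):
    with torch.no_grad():
        for p, m in zip(model.parameters(), mask):
            p.mul_(m)   # zero out pruned weights

# each round (sketch): clients train masked subnetworks locally, and the
# server averages only the surviving weights, e.g.
# mask = magnitude_mask(global_model); apply_mask(client_model, mask)
```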
