Search Results for author: Junyi Zhu

Found 8 papers, 6 papers with code

Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better

1 code implementation • 2 Apr 2024 • Enshu Liu, Junyi Zhu, Zinan Lin, Xuefei Ning, Matthew B. Blaschko, Sergey Yekhanin, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang

For example, LCSC achieves better performance with a single function evaluation (NFE) than the base model with 2 NFEs on consistency distillation, and decreases the NFE of DM from 15 to 9 while maintaining generation quality on CIFAR-10.
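The title's core operation, forming a linear combination of saved checkpoints in weight space, can be sketched as follows. This is an illustrative sketch assuming PyTorch state dicts and user-supplied combination coefficients; the search procedure LCSC uses to find good coefficients is not shown, and the toy checkpoints are hypothetical.

```python
import torch

def combine_checkpoints(state_dicts, coeffs):
    """Form a linear combination of model checkpoints (illustrative sketch).

    state_dicts: list of state dicts with identical keys and shapes.
    coeffs: one coefficient per checkpoint (in LCSC these would be found
            by a search procedure; here they are assumed to be given).
    """
    assert len(state_dicts) == len(coeffs) and len(state_dicts) > 0
    combined = {}
    for key in state_dicts[0]:
        combined[key] = sum(c * sd[key].float() for sd, c in zip(state_dicts, coeffs))
    return combined

# Usage with a toy model and hypothetical checkpoints saved during training
model = torch.nn.Linear(4, 2)
ckpts = [{k: v + 0.01 * i for k, v in model.state_dict().items()} for i in range(3)]
merged = combine_checkpoints(ckpts, coeffs=[0.2, 0.3, 0.5])
model.load_state_dict(merged)
```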

Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning

1 code implementation • 31 May 2023 • Junyi Zhu, Ruicong Yao, Matthew B. Blaschko

At first glance, FL appears to provide a degree of protection against gradient inversion attacks on weight updates, since the gradient of a single step is concealed by the accumulation of gradients over multiple local iterations.

Federated Learning
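For context on the setting described above, here is a minimal sketch of a FedAvg-style local update in which the weight update sent to the server accumulates the gradients of several local SGD steps; the SME attack itself (a surrogate-model extension of gradient inversion against such updates) is not reproduced, and the model and client data are placeholders.

```python
import copy
import torch
import torch.nn.functional as F

def local_update(global_model, data, targets, lr=0.1, local_steps=5):
    """Run several local SGD steps and return the weight update the server sees.

    The returned update is the accumulation of per-step gradients (scaled by lr),
    which conceals any single step's gradient -- the situation discussed above.
    """
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(local_steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(data), targets)
        loss.backward()
        opt.step()
    # Weight update communicated to the server: w_local - w_global
    return {k: model.state_dict()[k] - v for k, v in global_model.state_dict().items()}

# Hypothetical client model and data
torch.manual_seed(0)
global_model = torch.nn.Linear(10, 3)
x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
update = local_update(global_model, x, y)
print({k: v.shape for k, v in update.items()})
```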

Confidence-aware Personalized Federated Learning via Variational Expectation Maximization

1 code implementation • CVPR 2023 • Junyi Zhu, Xingchen Ma, Matthew B. Blaschko

A global model is introduced as a latent variable to augment the joint distribution of clients' parameters and capture the common trends of different clients; optimization is derived from the principle of maximizing the marginal likelihood and carried out via variational expectation maximization.

Personalized Federated Learning • Variational Inference
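An illustrative EM-style loop in the spirit of the description above, under simplifying assumptions: the global model acts as the mean of a Gaussian prior over client parameters (E-step: regularized local fitting; M-step: averaging client parameters). The paper's actual variational EM derivation and its confidence weighting are not reproduced; the model, clients, and hyperparameters are placeholders.

```python
import copy
import torch
import torch.nn.functional as F

def e_step(global_model, client_data, prior_strength=0.1, lr=0.1, steps=20):
    """E-step (sketch): each client fits its own parameters under a Gaussian
    prior centered at the global (latent) model, i.e. a regularized local update."""
    client_models = []
    for x, y in client_data:
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            # prior term pulls client parameters toward the global model
            for p, g in zip(model.parameters(), global_model.parameters()):
                loss = loss + 0.5 * prior_strength * (p - g.detach()).pow(2).sum()
            loss.backward()
            opt.step()
        client_models.append(model)
    return client_models

def m_step(global_model, client_models):
    """M-step (sketch): move the global model to the mean of client parameters."""
    with torch.no_grad():
        for i, g in enumerate(global_model.parameters()):
            g.copy_(torch.stack([list(m.parameters())[i] for m in client_models]).mean(0))

# Hypothetical federation with two clients
torch.manual_seed(0)
global_model = torch.nn.Linear(5, 2)
clients = [(torch.randn(16, 5), torch.randint(0, 2, (16,))) for _ in range(2)]
for _ in range(3):  # alternate E- and M-steps
    m_step(global_model, e_step(global_model, clients))
```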

Advancing Example Exploitation Can Alleviate Critical Challenges in Adversarial Training

1 code implementation • ICCV 2023 • Yao Ge, Yun Li, Keji Han, Junyi Zhu, Xianzhong Long

However, deep neural networks are susceptible to adversarial examples, which are generated by adding adversarial perturbations to the original data.
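To make the perturbations mentioned above concrete, here is a minimal FGSM-style generator, a standard single-step attack used only for illustration and not necessarily the method used in this paper; the model and data are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Generate adversarial examples with a single signed-gradient step (FGSM).
    A standard perturbation method, shown here only to illustrate the idea."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input in the direction that increases the loss, clipped to [0, 1]
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Placeholder model and data (e.g. flattened images in [0, 1])
torch.manual_seed(0)
model = torch.nn.Linear(784, 10)
x, y = torch.rand(4, 784), torch.randint(0, 10, (4,))
x_adv = fgsm_example(model, x, y)
```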

Improving Differentially Private SGD via Randomly Sparsified Gradients

1 code implementation • 1 Dec 2021 • Junyi Zhu, Matthew B. Blaschko

Differentially private stochastic gradient descent (DP-SGD) has been widely adopted in deep learning to provide rigorously defined privacy; it requires gradient clipping to bound the maximum norm of individual gradients and the addition of isotropic Gaussian noise.

Federated Learning
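A minimal sketch of the DP-SGD step described above: each example's gradient is clipped to a maximum norm, the clipped gradients are summed, isotropic Gaussian noise is added, and the model is updated. The `sparsify_frac` mask below is only a schematic stand-in for the randomly sparsified gradients studied in the paper, not its exact procedure; the model and batch are placeholders.

```python
import torch
import torch.nn.functional as F

def dpsgd_step(model, x, y, lr=0.1, clip=1.0, noise_mult=1.0, sparsify_frac=0.0):
    """One DP-SGD step (sketch): clip each example's gradient to norm `clip`,
    sum, add isotropic Gaussian noise, then update. `sparsify_frac` randomly
    zeroes a fraction of each per-example gradient before clipping (a schematic
    stand-in for the random sparsification studied in the paper)."""
    params = list(model.parameters())
    summed = [torch.zeros_like(p) for p in params]
    for i in range(x.shape[0]):  # per-example gradients (naive loop for clarity)
        model.zero_grad()
        F.cross_entropy(model(x[i:i + 1]), y[i:i + 1]).backward()
        grads = [p.grad.clone() for p in params]
        if sparsify_frac > 0:
            grads = [g * (torch.rand_like(g) >= sparsify_frac).float() for g in grads]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip / (norm.item() + 1e-12))
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noisy = s + noise_mult * clip * torch.randn_like(s)
            p.add_(noisy / x.shape[0], alpha=-lr)

# Placeholder model and batch
torch.manual_seed(0)
model = torch.nn.Linear(20, 2)
x, y = torch.randn(8, 20), torch.randint(0, 2, (8,))
dpsgd_step(model, x, y, sparsify_frac=0.5)
```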

R-GAP: Recursive Gradient Attack on Privacy

2 code implementations • ICLR 2021 • Junyi Zhu, Matthew Blaschko

However, recent optimization-based gradient attacks show that raw data can often be accurately recovered from gradients.

Federated Learning
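The optimization-based gradient attacks referred to above recover data by optimizing a dummy input until its gradients match the observed ones; below is a minimal sketch in that style (DLG-like), assuming the label is known. R-GAP itself instead recovers the input recursively in closed form, which is not shown here; the model and "observed" gradients are placeholders.

```python
import torch
import torch.nn.functional as F

def gradient_inversion(model, target_grads, y, x_shape, steps=200, lr=0.1):
    """Optimization-based gradient attack (sketch): optimize a dummy input so
    that its gradients match the observed gradients (DLG-style; R-GAP uses a
    recursive closed-form procedure instead)."""
    x_dummy = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x_dummy], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        dummy_grads = torch.autograd.grad(
            F.cross_entropy(model(x_dummy), y), model.parameters(), create_graph=True)
        # Match the dummy gradients to the observed (target) gradients
        loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, target_grads))
        loss.backward()
        opt.step()
    return x_dummy.detach()

# Placeholder victim data and the gradients observed by the attacker
torch.manual_seed(0)
model = torch.nn.Linear(16, 4)
x_true, y_true = torch.randn(1, 16), torch.tensor([2])
target_grads = [g.detach() for g in torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters())]
x_rec = gradient_inversion(model, target_grads, y_true, x_true.shape)
```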

Localization in Aerial Imagery with Grid Maps using LocGAN

no code implementations • 4 Jun 2019 • Haohao Hu, Junyi Zhu, Sascha Wirges, Martin Lauer

In this work, we present LocGAN, our localization approach based on geo-referenced aerial imagery and LiDAR grid maps.
