Search Results for author: Junyu Shi

Found 2 papers, 1 paper with code

Why Does Little Robustness Help? Understanding and Improving Adversarial Transferability from Surrogate Training

1 code implementation • 15 Jul 2023 • Yechao Zhang, Shengshan Hu, Leo Yu Zhang, Junyu Shi, Minghui Li, Xiaogeng Liu, Wei Wan, Hai Jin

Building on these insights, we explore the impacts of data augmentation and gradient regularization on transferability, and identify that the trade-off generally exists across various training mechanisms, thus building a comprehensive blueprint for the regulation mechanism behind transferability.

Attribute • Data Augmentation
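
The abstract above studies how surrogate-training choices (data augmentation, gradient regularization) influence adversarial transferability. As a rough, hypothetical illustration only, the PyTorch sketch below pairs a generic input-gradient penalty and an input-augmentation hook with standard PGD on the surrogate; the function names, the reg_coef value, and the specific regularizer are assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def surrogate_loss(model, x, y, reg_coef=0.1):
    """Cross-entropy plus a generic input-gradient-norm penalty
    (illustrative regularizer; the paper's exact choice may differ)."""
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(ce, x, create_graph=True)
    return ce + reg_coef * grad.flatten(1).norm(dim=1).mean()

def train_step(model, opt, x, y, augment):
    """One surrogate training step with a user-supplied augmentation callable."""
    opt.zero_grad()
    loss = surrogate_loss(model, augment(x), y)
    loss.backward()
    opt.step()
    return loss.item()

def pgd_attack(surrogate, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard PGD on the surrogate, assuming inputs lie in [0, 1]."""
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

In this sketch, transferability would be measured by feeding the resulting x_adv to a separately trained target model that never interacts with the surrogate.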

Challenges and Approaches for Mitigating Byzantine Attacks in Federated Learning

no code implementations • 29 Dec 2021 • Junyu Shi, Wei Wan, Shengshan Hu, Jianrong Lu, Leo Yu Zhang

Then we propose a new Byzantine attack method, called the weight attack, to defeat those defense schemes, and conduct experiments to demonstrate its threat.

Federated Learning
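
The entry above concerns Byzantine attacks on federated learning aggregation. As a generic, self-contained illustration (not the paper's weight attack), the sketch below shows a FedAvg round in which one Byzantine client both poisons its update and inflates its self-reported dataset size to gain aggregation weight; all names and numbers are made up for illustration.

```python
import torch

def fedavg(updates, weights):
    """Weighted average of client updates; weights typically come from
    self-reported local dataset sizes, which a Byzantine client can inflate."""
    total = sum(weights)
    return sum(w / total * u for u, w in zip(updates, weights))

def byzantine_update(honest_update, scale=-10.0):
    """A simple model-poisoning update that pushes the global model in the
    opposite direction of an honest update (generic illustration only)."""
    return scale * honest_update

# Toy round: each client contributes a flattened parameter-update vector.
honest = [torch.randn(5) * 0.01 for _ in range(4)]
malicious = byzantine_update(honest[0])
updates = honest + [malicious]
# The attacker also inflates its reported data size (assumed here) to
# dominate the weighted average.
sizes = [100, 100, 100, 100, 10000]
global_update = fedavg(updates, sizes)
print(global_update)
```

Robust aggregation rules (e.g., coordinate-wise median or trimmed mean) are the usual defenses such an attack tries to circumvent.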
