Search Results for author: Joshua Zhao

Found 1 paper, 0 papers with code

TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks

no code implementations · 19 Oct 2021 · Atul Sharma, Wei Chen, Joshua Zhao, Qiang Qiu, Somali Chaterji, Saurabh Bagchi

The attack exploits a simple intuition: if a set of malicious clients flips the sign of the gradient updates the optimizer computes, the model can be driven away from the optimum, increasing the test error rate.
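The flip itself is simple enough to sketch. Below is a minimal, hypothetical simulation of such a sign-flipping attack under plain FedAvg-style averaging; the function names, client counts, and update rule are illustrative assumptions, not the paper's implementation or the TESSERACT defense.

```python
import numpy as np

def local_update(global_weights, grad_fn, lr=0.01, malicious=False):
    """One simulated client step (illustrative sketch).

    Honest clients descend the gradient; malicious clients negate it,
    pushing the averaged model away from the optimum."""
    grad = grad_fn(global_weights)
    if malicious:
        grad = -grad  # the gradient sign flip at the core of the attack
    return global_weights - lr * grad

def federated_round(global_weights, grad_fn, n_clients=10, n_malicious=3):
    """Server averages all client updates, unaware of which are malicious."""
    updates = [
        local_update(global_weights, grad_fn, malicious=(i < n_malicious))
        for i in range(n_clients)
    ]
    return np.mean(updates, axis=0)

# Toy objective f(w) = ||w||^2, whose honest gradient is 2w.
w = np.array([1.0, -2.0])
for _ in range(50):
    w = federated_round(w, lambda w: 2 * w)
```

In this toy run, 3 of 10 flipped gradients merely slow convergence; once malicious clients dominate the average, the same flip drives the model to diverge, which is the poisoning effect the paper's defense is built to detect.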

Federated Learning · Model Poisoning
