Model Poisoning
24 papers with code • 3 benchmarks • 3 datasets
Most implemented papers
Ditto: Fair and Robust Federated Learning Through Personalization
Fairness and robustness are two important concerns for federated learning systems.
How To Backdoor Federated Learning
An attacker selected in a single round of federated learning can cause the global model to immediately reach 100% accuracy on the backdoor task.
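The single-round attack in that paper relies on model replacement: the attacker scales its submitted update so that, after the server averages over all clients, the backdoored model effectively overwrites the aggregate. A minimal sketch of the scaling step (illustrative only, not the paper's code), assuming plain unweighted FedAvg over n clients and flattened parameter vectors:

```python
# Illustrative sketch of the model-replacement scaling idea.
# Assumes the server computes: new_global = global + mean(client_updates),
# and that benign updates roughly cancel out.
import numpy as np

def model_replacement_update(global_params, backdoored_params, n_clients):
    """Scale the malicious delta so that averaging over n_clients
    moves the global model (approximately) onto the backdoored model."""
    delta = backdoored_params - global_params
    return n_clients * delta  # submitted as the attacker's "update"
```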
Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering
To this end, previous work either makes use of auxiliary data at the parameter server to verify the received gradients (e.g., by computing a validation error rate) or leverages statistics-based methods (e.g., median and Krum) to identify and remove malicious gradients from Byzantine clients.
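For reference, a minimal sketch of the two statistic-based aggregators mentioned above, coordinate-wise median and Krum, assuming client updates arrive as flattened NumPy vectors (the function names are illustrative, not from the paper):

```python
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median of client updates (list of 1-D arrays)."""
    return np.median(np.stack(updates), axis=0)

def krum_aggregate(updates, num_byzantine):
    """Krum: select the update closest (in summed squared L2 distance)
    to its n - f - 2 nearest neighbors, where f = num_byzantine."""
    n = len(updates)
    stacked = np.stack(updates)
    # Pairwise squared distances between all client updates.
    dists = np.sum((stacked[:, None, :] - stacked[None, :, :]) ** 2, axis=-1)
    k = n - num_byzantine - 2  # number of closest neighbors to score against
    scores = []
    for i in range(n):
        d = np.sort(np.delete(dists[i], i))  # distances to the other clients
        scores.append(d[:k].sum())
    return updates[int(np.argmin(scores))]
```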
Mitigating Sybils in Federated Learning Poisoning
Unfortunately, such approaches are susceptible to a variety of attacks, including model poisoning, which is made substantially worse in the presence of sybils.
Analyzing Federated Learning through an Adversarial Lens
Federated learning distributes model training among a multitude of agents, who, guided by privacy concerns, perform training using their local data but share only model parameter updates, for iterative aggregation at the server.
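A minimal sketch of that server-side aggregation step, FedAvg-style weighted averaging of client updates, assuming updates are flattened parameter deltas (names are illustrative):

```python
import numpy as np

def fedavg_round(global_params, client_updates, client_sizes):
    """One aggregation round: average client deltas weighted by local dataset size
    and apply the result to the current global parameters."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    aggregated = sum(w * u for w, u in zip(weights, client_updates))
    return global_params + aggregated
```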
Robust Federated Learning with Attack-Adaptive Aggregation
To the best of our knowledge, our aggregation strategy is the first one that can be adapted to defend against various attacks in a data-driven fashion.
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning
While recent works have indicated that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their real impact on production FL systems is not fully understood.
On the Security Risks of AutoML
Neural Architecture Search (NAS) represents an emerging machine learning (ML) paradigm that automatically searches for models tailored to given tasks, which greatly simplifies the development of ML systems and propels the trend of ML democratization.
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective
Furthermore, we derive a certified robustness guarantee against model poisoning attacks and a convergence guarantee to FedAvg after applying our FL-WBC.
ARFED: Attack-Resistant Federated averaging based on outlier elimination
ARFED mainly considers the outlier status of participant updates for each layer of the model architecture based on the distance to the global model.
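A rough interpretation of that idea as code (not the authors' implementation): compute each participant's per-layer distance to the global model, drop participants whose distance falls outside an interquartile-range band, and average the survivors. The IQR rule and the function name here are assumptions for illustration:

```python
import numpy as np

def layerwise_outlier_filter(global_layers, client_layers):
    """global_layers: list of arrays (one per layer);
    client_layers: one list of per-layer arrays for each participant."""
    n_clients = len(client_layers)
    keep = np.ones(n_clients, dtype=bool)
    for layer_idx, g in enumerate(global_layers):
        # Distance of each participant's layer to the global model's layer.
        dists = np.array([np.linalg.norm(c[layer_idx] - g) for c in client_layers])
        q1, q3 = np.percentile(dists, [25, 75])
        iqr = q3 - q1
        keep &= (dists >= q1 - 1.5 * iqr) & (dists <= q3 + 1.5 * iqr)
    kept = [client_layers[i] for i in range(n_clients) if keep[i]]
    # Average the surviving participants layer by layer.
    return [np.mean([c[l] for c in kept], axis=0) for l in range(len(global_layers))]
```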