Model Poisoning

21 papers with code • 3 benchmarks • 3 datasets

Model poisoning attacks manipulate the model updates submitted by compromised participants (typically in federated learning) so that the aggregated global model misbehaves, for example by carrying a backdoor or by losing accuracy; the papers below study such attacks and defenses against them.

Most implemented papers

Ditto: Fair and Robust Federated Learning Through Personalization

litian96/ditto 8 Dec 2020

Fairness and robustness are two important concerns for federated learning systems.

How To Backdoor Federated Learning

ebagdasa/backdoor_federated_learning 2 Jul 2018

An attacker selected in a single round of federated learning can cause the global model to immediately reach 100% accuracy on the backdoor task.
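
As a rough illustration of the model-replacement idea behind this attack, the sketch below shows how a single malicious client could scale its backdoored weights so that a FedAvg-style average is pulled toward them in one round; the function and variable names are illustrative and not taken from the ebagdasa/backdoor_federated_learning code.

```python
import numpy as np

def model_replacement_update(global_weights, backdoored_weights, num_clients):
    """Scale a malicious update so that, after equal-weight FedAvg averaging,
    the aggregate is approximately replaced by the attacker's backdoored model.

    Assumes benign updates roughly cancel out near convergence; the scaling
    factor num_clients plays the role of gamma in the paper's description.
    """
    scale = num_clients
    return global_weights + scale * (backdoored_weights - global_weights)

# Illustrative usage with toy weight vectors
global_w = np.zeros(10)
backdoor_w = np.full(10, 0.05)   # attacker's locally trained backdoored model
poisoned_update = model_replacement_update(global_w, backdoor_w, num_clients=100)
# Averaged with 99 benign (near-zero) updates, the global model lands close
# to backdoor_w after a single round.
```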

Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering

jianxu95/signguard 13 Sep 2021

To this end, previous work either makes use of auxiliary data at the parameter server to verify the received gradients (e.g., by computing a validation error rate) or leverages statistics-based methods (e.g., median and Krum) to identify and remove malicious gradients from Byzantine clients.
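
For context, here is a minimal sketch of the statistics-based aggregators the abstract refers to (coordinate-wise median and Krum), not of the SignGuard method itself; the formulations follow the commonly cited definitions and the helper names are ours.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client updates by taking the median of each coordinate,
    which bounds the influence of a minority of Byzantine clients."""
    return np.median(np.stack(updates), axis=0)

def krum(updates, num_byzantine):
    """Select the single update closest (in summed squared L2 distance) to its
    n - f - 2 nearest neighbours, as in the original Krum rule."""
    n = len(updates)
    stacked = np.stack(updates)
    scores = []
    for i in range(n):
        dists = np.sum((stacked - stacked[i]) ** 2, axis=1)
        dists.sort()
        # skip the zero distance to itself, keep the n - f - 2 closest others
        scores.append(dists[1 : n - num_byzantine - 1].sum())
    return updates[int(np.argmin(scores))]
```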

Mitigating Sybils in Federated Learning Poisoning

DistributedML/FoolsGold 14 Aug 2018

Unfortunately, such approaches are susceptible to a variety of attacks, including model poisoning, which is made substantially worse in the presence of sybils.

Analyzing Federated Learning through an Adversarial Lens

inspire-group/ModelPoisoning ICLR 2019

Federated learning distributes model training among a multitude of agents who, guided by privacy concerns, perform training using their local data but share only model parameter updates for iterative aggregation at the server.
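
A minimal sketch of the iterative aggregation loop this sentence describes (a FedAvg-style round) is given below; local_train and the toy objective are hypothetical stand-ins, not the inspire-group/ModelPoisoning code.

```python
import numpy as np

def federated_round(global_weights, client_datasets, local_train):
    """One FedAvg-style round: each client starts from the global model,
    trains on its private data, and only the resulting weights are sent
    back and averaged by the server.

    local_train(weights, dataset) is a hypothetical helper returning the
    client's locally updated weight vector.
    """
    client_weights = [local_train(global_weights.copy(), ds) for ds in client_datasets]
    return np.mean(np.stack(client_weights), axis=0)

# Illustrative usage with a toy quadratic objective per client
def toy_local_train(weights, target, lr=0.1, steps=5):
    for _ in range(steps):
        weights -= lr * (weights - target)   # gradient of 0.5 * ||w - target||^2
    return weights

global_w = np.zeros(3)
clients = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
for _ in range(10):
    global_w = federated_round(global_w, clients, toy_local_train)
```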

Robust Federated Learning with Attack-Adaptive Aggregation

cpwan/Attack-Adaptive-Aggregation 10 Feb 2021

To the best of our knowledge, our aggregation strategy is the first one that can be adapted to defend against various attacks in a data-driven fashion.

Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning

vrt1shjwlkr/ndss21-model-poisoning 23 Aug 2021

While recent works have indicated that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their real impact on production FL systems is not fully understood.

On the Security Risks of AutoML

ain-soph/autovul 12 Oct 2021

Neural Architecture Search (NAS) represents an emerging machine learning (ML) paradigm that automatically searches for models tailored to given tasks, which greatly simplifies the development of ML systems and propels the trend of ML democratization.

FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective

jeremy313/fl-wbc NeurIPS 2021

Furthermore, we derive a certified robustness guarantee against model poisoning attacks and a convergence guarantee to FedAvg after applying our FL-WBC.

ARFED: Attack-Resistant Federated averaging based on outlier elimination

eceisik/ARFED 8 Nov 2021

For each layer of the model architecture, ARFED assesses whether a participant's update is an outlier based on its distance to the global model, and excludes outliers from aggregation.
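
A hedged sketch of this per-layer, distance-based outlier elimination is shown below; the IQR-style threshold is an assumption for illustration and may differ from the official eceisik/ARFED implementation.

```python
import numpy as np

def filter_outlier_updates(global_layer, client_layers, k=1.5):
    """Keep only client updates for a given layer whose distance to the
    current global layer falls inside an interquartile-range band, then
    average the survivors.

    The IQR-based threshold is an illustrative choice, not necessarily the
    exact rule used by ARFED.
    """
    dists = np.array([np.linalg.norm(c - global_layer) for c in client_layers])
    q1, q3 = np.percentile(dists, [25, 75])
    upper = q3 + k * (q3 - q1)
    kept = [c for c, d in zip(client_layers, dists) if d <= upper]
    if not kept:                     # fall back to the current global layer
        return global_layer
    return np.mean(np.stack(kept), axis=0)
```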