1 code implementation • 24 Feb 2023 • Najeeb Moharram Jebreel, Josep Domingo-Ferrer, Yiming Li
We find that the feature difference between benign and poisoned samples tends to be largest at a critical layer, which is not always the layer typically used in existing defenses, namely the one before the fully-connected layers.
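The critical-layer idea can be illustrated with a small sketch: compare per-layer feature centroids of benign and poisoned samples and pick the layer where they diverge most. The function and its interface below are assumptions for illustration, not the paper's actual code.

```python
import numpy as np

def critical_layer(benign_feats, poisoned_feats):
    """Return the index of the layer with the largest gap between the
    benign and poisoned feature centroids.

    benign_feats, poisoned_feats: lists of (n_samples, dim) arrays,
    one per layer (hypothetical interface).
    """
    diffs = []
    for b, p in zip(benign_feats, poisoned_feats):
        # L2 distance between the two centroids at this layer.
        diffs.append(np.linalg.norm(b.mean(axis=0) - p.mean(axis=0)))
    return int(np.argmax(diffs))

# Toy example: layer 1 is made to show the largest benign/poisoned gap.
rng = np.random.default_rng(0)
benign = [rng.normal(0, 1, (8, 4)) for _ in range(3)]
poisoned = [benign[0] + 0.1, benign[1] + 5.0, benign[2] + 0.1]
print(critical_layer(benign, poisoned))  # → 1
```

In a real defense, the per-layer features would come from forward hooks on the network rather than precomputed arrays.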
1 code implementation • 13 Jul 2022 • Najeeb Moharram Jebreel, Josep Domingo-Ferrer, Alberto Blanco-Justicia, David Sanchez
To tackle the accuracy-privacy-security conflict, we propose fragmented federated learning (FFL), in which participants randomly exchange and mix fragments of their updates before sending them to the server.
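A minimal sketch of the fragment-exchange step, under simplifying assumptions (updates as flat vectors, a trusted shuffle standing in for peer-to-peer exchange; this is not the authors' protocol):

```python
import numpy as np

def fragment_and_mix(updates, n_fragments, rng):
    """Split each participant's update vector into fragments and, at each
    fragment position, randomly permute fragments across participants
    before reassembling. Illustrative simplification of FFL-style mixing.
    """
    parts = [np.array_split(u, n_fragments) for u in updates]
    mixed = [[None] * n_fragments for _ in updates]
    for j in range(n_fragments):
        perm = rng.permutation(len(updates))
        for i, src in enumerate(perm):
            mixed[i][j] = parts[src][j]
    return [np.concatenate(m) for m in mixed]

rng = np.random.default_rng(1)
updates = [rng.normal(0, 1, 6) for _ in range(4)]
mixed = fragment_and_mix(updates, 3, rng)
# Mixing permutes fragments but preserves their multiset, so the sum
# (and hence the server-side average) of updates is unchanged.
print(np.allclose(sum(mixed), sum(updates)))  # → True
```

The preserved sum is why accuracy is not hurt: the server's aggregate is the same as without mixing, while no single received vector equals any participant's original update.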
no code implementations • 5 Jul 2022 • Najeeb Moharram Jebreel, Josep Domingo-Ferrer, David Sánchez, Alberto Blanco-Justicia
The label-flipping (LF) attack is a targeted poisoning attack where the attackers poison their training data by flipping the labels of some examples from one class (i.e., the source class) to another (i.e., the target class).
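The attack itself is simple to state in code. Below is a hedged sketch; the `flip_rate` parameter and function name are assumptions for illustration, not from the paper:

```python
import numpy as np

def flip_labels(labels, source, target, flip_rate, rng):
    """Flip a fraction of source-class labels to the target class,
    as in the LF attack described above (illustrative sketch)."""
    labels = labels.copy()
    idx = np.flatnonzero(labels == source)
    n_flip = int(len(idx) * flip_rate)
    chosen = rng.choice(idx, size=n_flip, replace=False)
    labels[chosen] = target  # poisoned labels; features are untouched
    return labels

rng = np.random.default_rng(2)
labels = np.array([0] * 10 + [1] * 10)
poisoned = flip_labels(labels, source=0, target=2, flip_rate=1.0, rng=rng)
print(poisoned[:10])  # → all 2s; the class-1 examples are untouched
```

Because only labels change, defenses against LF typically look at the resulting update directions rather than the raw data.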