no code implementations • 2 Apr 2024 • Alberto Blanco-Justicia, Najeeb Jebreel, Benet Manzanares, David Sánchez, Josep Domingo-Ferrer, Guillem Collell, Kuan Eeik Tan
The objective of digital forgetting is, given a model with undesirable knowledge or behavior, to obtain a new model in which the detected issues are no longer present.
no code implementations • 6 Nov 2023 • David Sánchez, Najeeb Jebreel, Josep Domingo-Ferrer, Krishnamurty Muralidhar, Alberto Blanco-Justicia
The alleged threat of reconstruction attacks has led the U.S. Census Bureau (USCB) to replace, in the 2020 Decennial Census, the traditional statistical disclosure limitation based on rank swapping with one based on differential privacy (DP).
no code implementations • 3 Nov 2022 • Emily Jefferson, James Liley, Maeve Malone, Smarti Reel, Alba Crespi-Boixader, Xaroula Kerasidou, Francesco Tava, Andrew McCarthy, Richard Preen, Alberto Blanco-Justicia, Esma Mansouri-Benssassi, Josep Domingo-Ferrer, Jillian Beggs, Antony Chuter, Christian Cole, Felix Ritchie, Angela Daly, Simon Rogers, Jim Smith
This is a complex topic, and it is unreasonable to expect that all TREs are aware of all such risks, or that TRE researchers have addressed these risks through AI-specific training.
1 code implementation • 13 Jul 2022 • Najeeb Moharram Jebreel, Josep Domingo-Ferrer, Alberto Blanco-Justicia, David Sanchez
To tackle the accuracy-privacy-security conflict, we propose fragmented federated learning (FFL), in which participants randomly exchange and mix fragments of their updates before sending them to the server.
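The fragment-and-mix idea can be illustrated with a toy sketch: each participant splits its (here, scalar) update into random fragments, fragments are shuffled among participants, and each participant forwards the sum of the fragments it ends up holding. All names and the fragmentation scheme below are illustrative assumptions, not the paper's exact protocol; the point shown is that the server-side aggregate is preserved while individual updates are obscured.

```python
import random

def fragment_and_mix(updates, num_fragments=4, seed=0):
    """Toy sketch of fragmented update exchange (illustrative only).

    Each participant's update is split into `num_fragments` random parts
    that sum to the original value; all parts are then shuffled and
    redistributed, and each participant sends the sum of its share.
    """
    rng = random.Random(seed)
    n = len(updates)
    all_fragments = []
    for u in updates:
        # Random cut points split the update into fragments summing to u.
        cuts = sorted(rng.uniform(0, 1) for _ in range(num_fragments - 1))
        prev = 0.0
        for c in cuts + [1.0]:
            all_fragments.append(u * (c - prev))
            prev = c
    # Redistribute fragments among the n participants at random.
    rng.shuffle(all_fragments)
    return [sum(all_fragments[i::n]) for i in range(n)]

updates = [1.0, 2.0, 3.0]
mixed = fragment_and_mix(updates)
# The total (and hence the server's average) is unchanged by mixing.
print(abs(sum(mixed) - sum(updates)) < 1e-9)
```

The key property, visible in the sketch, is that mixing is sum-preserving, so a simple averaging server computes the same global model as without fragmentation.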
no code implementations • 5 Jul 2022 • Najeeb Moharram Jebreel, Josep Domingo-Ferrer, David Sánchez, Alberto Blanco-Justicia
The label-flipping (LF) attack is a targeted poisoning attack where the attackers poison their training data by flipping the labels of some examples from one class (i.e., the source class) to another (i.e., the target class).
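The attack mechanism described above is simple to state in code. The following is a minimal, hypothetical sketch (function and parameter names are our own, not from the paper) of how an attacker would poison a label vector:

```python
import random

def flip_labels(labels, source_class, target_class, flip_fraction=1.0, seed=0):
    """Toy label-flipping poisoning (illustrative sketch).

    Flips `flip_fraction` of the examples labeled `source_class`
    to `target_class`, leaving all other labels untouched.
    """
    rng = random.Random(seed)
    poisoned = list(labels)
    source_idx = [i for i, y in enumerate(poisoned) if y == source_class]
    n_flip = int(len(source_idx) * flip_fraction)
    for i in rng.sample(source_idx, n_flip):
        poisoned[i] = target_class
    return poisoned

labels = [0, 1, 0, 1, 0, 0]
print(flip_labels(labels, source_class=0, target_class=1))
# → [1, 1, 1, 1, 1, 1]
```

In a federated setting, a malicious client would apply such a flip to its local data before training, biasing the global model to misclassify source-class inputs as the target class.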
1 code implementation • 9 Jun 2022 • Alberto Blanco-Justicia, David Sanchez, Josep Domingo-Ferrer, Krishnamurty Muralidhar
We review the use of differential privacy (DP) for privacy protection in machine learning (ML).
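As background for such a review, the canonical building block of DP is the Laplace mechanism: a numeric query is released with additive Laplace noise of scale sensitivity/ε. A minimal sketch (not tied to any specific result in the paper):

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, seed=None):
    """Classic epsilon-DP Laplace mechanism (illustrative sketch).

    Adds Laplace(0, sensitivity/epsilon) noise to a numeric query result.
    A Laplace(0, b) sample is drawn as the difference of two independent
    exponentials with mean b.
    """
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_value + noise

# Counting query: sensitivity is 1, since adding or removing
# one record changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=100, sensitivity=1.0, epsilon=0.5)
```

Smaller ε means larger noise and stronger privacy; in ML this mechanism (or its Gaussian analogue) is typically applied to gradients or model parameters rather than to raw counts.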
no code implementations • 4 Aug 2021 • Josep Domingo-Ferrer, Alberto Blanco-Justicia, Jesús Manjón, David Sánchez
In this paper we build a federated learning framework that offers privacy to the participating peers as well as security against Byzantine and poisoning attacks.
no code implementations • 12 Dec 2020 • Alberto Blanco-Justicia, Josep Domingo-Ferrer, Sergio Martínez, David Sánchez, Adrian Flanagan, Kuan Eeik Tan
In contrast with centralized ML approaches, FL saves computation to the server and does not require the clients to outsource their private data to the server.