1 code implementation • 29 Sep 2024 • Dovydas Joksas, Luis Muñoz-González, Emil Lupu, Adnan Mehonic
Neural networks are now deployed in a wide range of areas, from object classification to natural language systems.
no code implementations • 2 Jun 2023 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu
We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters and models the attack as a multiobjective bilevel optimization problem.
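A rough sketch of the general shape of such a formulation (illustrative notation, not the paper's exact objectives): the attacker chooses poisoning points D_p to maximize the loss on clean validation data, anticipating that both the hyperparameters λ and the parameters θ are then learned on the poisoned training set; the multiobjective variant stacks several such outer attacker objectives.

```latex
\max_{D_p}\; \mathcal{L}\!\left(D_{\mathrm{val}},\, \theta^{\star}\right)
\quad \text{s.t.} \quad
(\theta^{\star}, \lambda^{\star}) \in \arg\min_{\theta,\, \lambda}\;
\mathcal{L}\!\left(D_{\mathrm{tr}} \cup D_p,\; \theta,\; \lambda\right)
```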
no code implementations • 2 Dec 2021 • Stefán Páll Sturluson, Samuel Trew, Luis Muñoz-González, Matei Grama, Jonathan Passerat-Palmbach, Daniel Rueckert, Amir Alansary
The robustness of federated learning (FL) is vital for the distributed training of an accurate global model that is shared among a large number of clients.
no code implementations • 23 May 2021 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu
Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance.
no code implementations • 16 May 2021 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Emil C. Lupu
Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit systemic vulnerabilities and enable physically realizable and robust attacks against Deep Neural Networks (DNNs).
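A minimal sketch of how a single UAP can be computed (a simple stochastic gradient variant with an L-infinity constraint, assuming a PyTorch classifier, ImageNet-sized inputs, and a data loader; illustrative only, not the specific attacks studied in the paper):

```python
import torch

def craft_uap(model, loader, eps=8/255, lr=0.01, epochs=5):
    """Iteratively update one shared perturbation that raises the loss on many inputs."""
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)  # assumed input shape
    loss_fn = torch.nn.CrossEntropyLoss()
    model.eval()
    for _ in range(epochs):
        for x, y in loader:
            loss = loss_fn(model(x + delta), y)    # loss of the perturbed batch
            loss.backward()
            with torch.no_grad():
                delta += lr * delta.grad.sign()    # ascend the loss
                delta.clamp_(-eps, eps)            # keep the perturbation small
            delta.grad.zero_()
    return delta.detach()
```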
no code implementations • 12 Feb 2021 • Raphael Labaca-Castro, Luis Muñoz-González, Feargus Pendlebury, Gabi Dreo Rodosek, Fabio Pierazzi, Lorenzo Cavallaro
Universal Adversarial Perturbations (UAPs), which identify noisy patterns that generalize across the input space, allow the attacker to greatly scale up the generation of such examples.
1 code implementation • 10 Dec 2020 • Alberto G. Matachana, Kenneth T. Co, Luis Muñoz-González, David Martinez, Emil C. Lupu
In this work, we analyze the effect of various compression techniques on UAP attacks, including different forms of pruning and quantization.
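A minimal sketch of the two kinds of compression mentioned (global magnitude pruning and post-training dynamic quantization), using standard PyTorch utilities with illustrative hyperparameters; the paper's exact compression settings may differ:

```python
import torch
import torch.nn.utils.prune as prune

def magnitude_prune(model, amount=0.5):
    """Zero out the smallest-magnitude weights in every linear/conv layer."""
    for module in model.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return model

def quantize_dynamic(model):
    """Post-training dynamic quantization of linear layers to 8-bit integers."""
    return torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )
```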
no code implementations • 17 Sep 2020 • Matei Grama, Maria Musat, Luis Muñoz-González, Jonathan Passerat-Palmbach, Daniel Rueckert, Amir Alansary
In this work, we implement and evaluate different robust aggregation methods in FL applied to healthcare data.
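As a minimal sketch of one widely used robust aggregation rule (the coordinate-wise median; the specific methods implemented and evaluated in the paper may differ):

```python
import numpy as np

def coordinate_wise_median(client_updates):
    """Aggregate client model updates with a per-parameter median.
    A single outlying (e.g. poisoned) update cannot drag the result
    arbitrarily far, unlike a plain mean."""
    stacked = np.stack(client_updates)   # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)

# usage: global_update = coordinate_wise_median([u1, u2, u3])
```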
no code implementations • 28 Feb 2020 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu
We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters by modelling the attack as a multiobjective bilevel optimisation problem.
1 code implementation • 23 Nov 2019 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker, Emil C. Lupu
Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise.
no code implementations • 11 Sep 2019 • Luis Muñoz-González, Kenneth T. Co, Emil C. Lupu
Federated learning enables the collaborative training of machine learning models at scale with many participants whilst preserving the privacy of their datasets.
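For context, a minimal sketch of one federated averaging round (local_train is a placeholder supplied by each client, and the flat NumPy parameter vector is an assumption; this illustrates the generic FL training loop rather than the specific protocol studied in the paper):

```python
import numpy as np

def federated_round(global_params, clients, local_train):
    """One round: each client trains locally on its private data, then the
    server averages the returned parameters, weighted by local dataset size."""
    updates, sizes = [], []
    for client_data in clients:
        local_params = local_train(global_params.copy(), client_data)
        updates.append(local_params)
        sizes.append(len(client_data))
    weights = np.array(sizes) / sum(sizes)
    return sum(w * p for w, p in zip(weights, updates))
```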
1 code implementation • 18 Jun 2019 • Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e., samples that look like genuine data points but degrade the classifier's accuracy when used for training.
1 code implementation • ICML Workshop Deep_Phenomen 2019 • Kenneth T. Co, Luis Muñoz-González, Emil C. Lupu
Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset.
2 code implementations • 30 Sep 2018 • Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time.
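A minimal sketch of one standard way to craft such a test-time adversarial example (the fast gradient sign method, shown here purely as a generic illustration, not as the attack proposed in this paper):

```python
import torch

def fgsm(model, x, y, eps=0.03):
    """Perturb input x in the direction that most increases the loss (one gradient step)."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()      # signed gradient step
    return x_adv.clamp(0, 1).detach()    # keep pixels in a valid range
```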
no code implementations • 16 Aug 2018 • Ziyi Bao, Luis Muñoz-González, Emil C. Lupu
We propose a design methodology to evaluate the security of machine learning classifiers with embedded feature selection against adversarial examples crafted using different attack strategies.
no code implementations • 2 Mar 2018 • Andrea Paudice, Luis Muñoz-González, Emil C. Lupu
Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points.
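A minimal sketch of this threat model for a binary task: the attacker flips the labels of a randomly chosen fraction of the training points (the fraction and the random selection here are illustrative):

```python
import numpy as np

def flip_labels(y, fraction=0.1, rng=None):
    """Flip the labels (0 <-> 1) of a random fraction of the training points."""
    rng = rng or np.random.default_rng(0)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned
```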
1 code implementation • 8 Feb 2018 • Andrea Paudice, Luis Muñoz-González, Andras Gyorgy, Emil C. Lupu
We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered to craft the attack.
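A minimal sketch of the kind of outlier-based check this observation suggests, scoring each training point by its distance to its own class centroid (an illustrative heuristic, not necessarily the defence used in the paper):

```python
import numpy as np

def centroid_outlier_scores(X, y):
    """Score each training point by its distance to the centroid of its class.
    Points with unusually large scores are candidate poisoning points."""
    scores = np.zeros(len(X))
    for c in np.unique(y):
        mask = y == c
        centroid = X[mask].mean(axis=0)
        scores[mask] = np.linalg.norm(X[mask] - centroid, axis=1)
    return scores

# usage: drop points whose score exceeds, e.g., the 95th percentile of their class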
no code implementations • 29 Aug 2017 • Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process.
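A highly simplified sketch of gradient-based poisoning on a linear model, approximating the inner training problem with a few unrolled gradient steps (illustrative only, not the exact optimization procedure in the paper; all tensors are assumed to be float-valued):

```python
import torch

def poison_point(x_p, y_p, x_tr, y_tr, x_val, y_val, steps=50, inner_steps=20):
    """Adjust one poisoning point so that a model trained on (clean + poison) data
    performs badly on clean validation data."""
    x_p = x_p.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_p], lr=0.1)
    for _ in range(steps):
        w = torch.zeros(x_tr.shape[1], requires_grad=True)
        # inner problem: short unrolled training on the poisoned dataset
        for _ in range(inner_steps):
            X = torch.cat([x_tr, x_p.unsqueeze(0)])
            Y = torch.cat([y_tr, y_p.unsqueeze(0)])
            loss = torch.nn.functional.binary_cross_entropy_with_logits(X @ w, Y)
            (g,) = torch.autograd.grad(loss, w, create_graph=True)
            w = w - 0.1 * g
        # outer problem: maximize the trained model's loss on clean validation data
        val_loss = torch.nn.functional.binary_cross_entropy_with_logits(x_val @ w, y_val)
        opt.zero_grad()
        (-val_loss).backward()
        opt.step()
    return x_p.detach()
```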
no code implementations • 22 Jun 2016 • Luis Muñoz-González, Daniele Sgandurra, Andrea Paudice, Emil C. Lupu
We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages of approximate inference techniques in scaling to larger attack graphs.
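A minimal sketch of the message-passing mechanics of loopy belief propagation, on a generic pairwise model with binary variables (the attack graph models compared in the paper are richer, so this is only illustrative):

```python
import numpy as np

def loopy_bp(unary, pairwise, edges, iters=30):
    """Sum-product loopy belief propagation for binary variables.
    unary[i]: length-2 potential of node i;
    pairwise[(i, j)]: 2x2 potential indexed as [state_i, state_j].
    Returns approximate marginals (beliefs) for every node."""
    msgs = {(i, j): np.ones(2) for (a, b) in edges for (i, j) in ((a, b), (b, a))}
    for _ in range(iters):
        for (i, j) in list(msgs):
            # node i's potential times all messages into i except the one from j
            incoming = np.asarray(unary[i], dtype=float)
            for (k, l) in msgs:
                if l == i and k != j:
                    incoming = incoming * msgs[(k, i)]
            psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            m = psi.T @ incoming          # sum out node i's state
            msgs[(i, j)] = m / m.sum()    # normalize for numerical stability
    beliefs = {}
    for i in unary:
        b = np.asarray(unary[i], dtype=float)
        for (k, l) in msgs:
            if l == i:
                b = b * msgs[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs
```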
no code implementations • 8 Oct 2015 • Luis Muñoz-González, Daniele Sgandurra, Martín Barrère, Emil Lupu
Attack graphs are a powerful tool for security risk assessment by analysing network vulnerabilities and the paths attackers can use to compromise network resources.