Search Results for author: Luis Muñoz-González

Found 19 papers, 6 papers with code

Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization

no code implementations • 2 Jun 2023 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu

We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters and models the attack as a multiobjective bilevel optimization problem.

Bilevel Optimization • Data Poisoning
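As a rough schematic of this kind of formulation, the attacker's problem sits above the learner's joint hyperparameter and parameter optimization. The notation below is assumed for illustration and is not copied from the paper:

```latex
% Schematic multiobjective bilevel poisoning attack (notation assumed):
% D_p = poisoning points, D_tr / D_val = clean training / validation data,
% \lambda = regularization hyperparameters, \theta = model parameters.
\max_{D_p}\; \mathcal{A}\left(D_{\mathrm{val}},\, \theta^{\star}\right)
\quad \text{s.t.} \quad
\lambda^{\star} \in \arg\min_{\lambda}\; \mathcal{L}\left(D_{\mathrm{val}},\, \theta^{\star}(\lambda)\right),
\qquad
\theta^{\star}(\lambda) \in \arg\min_{\theta}\; \mathcal{L}\left(D_{\mathrm{tr}} \cup D_p,\, \theta;\, \lambda\right)
```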

FedRAD: Federated Robust Adaptive Distillation

no code implementations • 2 Dec 2021 • Stefán Páll Sturluson, Samuel Trew, Luis Muñoz-González, Matei Grama, Jonathan Passerat-Palmbach, Daniel Rueckert, Amir Alansary

The robustness of federated learning (FL) is vital for the distributed training of an accurate global model that is shared among a large number of clients.

Federated Learning • Knowledge Distillation • +1
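A minimal sketch of median-based client scoring in the spirit of robust adaptive distillation; the function names, shapes, and scoring rule are assumptions for illustration, not the paper's implementation:

```python
# Sketch: score clients by agreement with the ensemble median prediction on
# a shared distillation set, then weight their parameter updates accordingly.
import numpy as np

def aggregate(client_weights, client_logits):
    """client_weights: list of flat parameter vectors (np.ndarray).
    client_logits: (n_clients, n_samples, n_classes) predictions on a
    shared public/distillation set."""
    median_logits = np.median(client_logits, axis=0)   # robust ensemble
    ensemble_pred = median_logits.argmax(axis=-1)
    # Each client's score = how often it agrees with the median ensemble.
    scores = np.array([(l.argmax(axis=-1) == ensemble_pred).mean()
                       for l in client_logits])
    scores /= scores.sum()
    return sum(s * w for s, w in zip(scores, client_weights))
```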

Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters

no code implementations • 23 May 2021 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu

Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance.

Bilevel Optimization • regression
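A toy sketch of the phenomenon the paper studies: with the poisoning fixed, sweeping the L2 regularization strength changes how much damage the attack does. The data, attack, and values below are illustrative assumptions:

```python
# Ridge regression on poisoned data: larger alpha damps the poisoned points'
# influence on the learned weights (at the cost of some bias).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)
y_poisoned = y.copy()
y_poisoned[:15] *= -1.0                      # toy attack: flip 15 targets
X_test = rng.normal(size=(1000, 5))
y_test = X_test @ w_true

for lam in (1e-4, 1e-1, 1e1, 1e3):
    model = Ridge(alpha=lam).fit(X, y_poisoned)
    mse = np.mean((model.predict(X_test) - y_test) ** 2)
    print(f"alpha={lam:g}  test MSE={mse:.3f}")
```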

Real-time Detection of Practical Universal Adversarial Perturbations

no code implementations • 16 May 2021 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Emil C. Lupu

Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit systemic vulnerabilities in Deep Neural Networks (DNNs) and enable physically realizable, robust attacks against them.

Blocking • Image Classification • +2
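A hedged sketch of one way real-time UAP detection can work: monitor a hidden layer's mean activation and flag inputs that deviate sharply from clean-data statistics. The layer choice, statistic, and threshold are assumptions, not the paper's method:

```python
import torch

def fit_baseline(model, layer, clean_loader):
    """Mean/std of the layer's per-input mean activation over clean data."""
    stats = []
    hook = layer.register_forward_hook(
        lambda m, i, o: stats.append(o.detach().flatten(1).mean(dim=1)))
    with torch.no_grad():
        for x, _ in clean_loader:
            model(x)
    hook.remove()
    acts = torch.cat(stats)
    return acts.mean().item(), acts.std().item()

def is_suspicious(model, layer, x, mu, sigma, k=4.0):
    """Flag inputs whose activation statistic is > k std devs from baseline."""
    out = {}
    hook = layer.register_forward_hook(
        lambda m, i, o: out.update(a=o.detach().flatten(1).mean(dim=1)))
    with torch.no_grad():
        model(x)
    hook.remove()
    return (out["a"] - mu).abs() > k * sigma   # boolean per input
```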

Realizable Universal Adversarial Perturbations for Malware

no code implementations • 12 Feb 2021 • Raphael Labaca-Castro, Luis Muñoz-González, Feargus Pendlebury, Gabi Dreo Rodosek, Fabio Pierazzi, Lorenzo Cavallaro

Universal Adversarial Perturbations (UAPs), which identify noisy patterns that generalize across the input space, allow the attacker to greatly scale up the generation of such examples.

Malware Classification
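For reference, a minimal sketch of the standard iterative UAP recipe: a single perturbation optimized to raise the loss across many inputs under an L-infinity budget. For malware the perturbation lives in feature space, and the paper's realizability constraints are omitted; everything below is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def compute_uap(model, loader, input_shape, eps=0.05, lr=0.01, epochs=5):
    delta = torch.zeros(1, *input_shape, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            loss = -F.cross_entropy(model(x + delta), y)  # ascend the loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                   # project to budget
    return delta.detach()
```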

Robustness and Transferability of Universal Attacks on Compressed Models

1 code implementation • 10 Dec 2020 • Alberto G. Matachana, Kenneth T. Co, Luis Muñoz-González, David Martinez, Emil C. Lupu

In this work, we analyze the effect of various compression techniques on UAP attacks, including different forms of pruning and quantization.

Neural Network Compression • Quantization
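A hedged sketch of the kind of transferability measurement this involves: craft a UAP on a source model, then record the fooling rate (the fraction of predictions the perturbation changes) on each compressed variant. Model names below are placeholders:

```python
import torch

@torch.no_grad()
def fooling_rate(model, loader, delta):
    flipped, total = 0, 0
    for x, _ in loader:
        clean = model(x).argmax(dim=1)
        perturbed = model(x + delta).argmax(dim=1)
        flipped += (clean != perturbed).sum().item()
        total += x.size(0)
    return flipped / total

# usage sketch (pruned/quantized variants assumed to exist):
# delta = compute_uap(source_model, loader, input_shape=(3, 224, 224))
# for name, m in {"pruned": pruned_model, "int8": quantized_model}.items():
#     print(name, fooling_rate(m, test_loader, delta))
```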

Robust Aggregation for Adaptive Privacy Preserving Federated Learning in Healthcare

no code implementations • 17 Sep 2020 • Matei Grama, Maria Musat, Luis Muñoz-González, Jonathan Passerat-Palmbach, Daniel Rueckert, Amir Alansary

In this work, we implement and evaluate different robust aggregation methods in FL applied to healthcare data.

Cryptography and Security
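Two standard robust aggregators commonly evaluated in this setting are the coordinate-wise median and the trimmed mean; a minimal sketch (the trimming fraction is an illustrative choice):

```python
import numpy as np

def coordinate_median(updates):
    """updates: list of flat client parameter vectors."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, trim=0.2):
    u = np.sort(np.stack(updates), axis=0)        # sort each coordinate
    k = int(len(updates) * trim)
    return u[k:len(updates) - k].mean(axis=0)     # drop extremes, average rest
```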

Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation

no code implementations • 28 Feb 2020 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu

We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters by modelling the attack as a multiobjective bilevel optimisation problem.

Bilevel Optimization • Data Poisoning • +2

Universal Adversarial Robustness of Texture and Shape-Biased Models

1 code implementation • 23 Nov 2019 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker, Emil C. Lupu

Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise.

Adversarial Robustness • Image Classification

Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging

no code implementations • 11 Sep 2019 • Luis Muñoz-González, Kenneth T. Co, Emil C. Lupu

Federated learning enables training collaborative machine learning models at scale with many participants whilst preserving the privacy of their datasets.

BIG-bench Machine Learning • Federated Learning
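A hedged sketch in the spirit of adaptive model averaging: keep a running reliability score per client from its similarity to a robust reference update and exclude persistently anomalous clients. The paper models client behaviour probabilistically; the exponential-moving-average scoring below is a simplification:

```python
import numpy as np

class AdaptiveAverager:
    def __init__(self, n_clients, decay=0.9, block_below=0.2):
        self.reliability = np.ones(n_clients)
        self.decay, self.block_below = decay, block_below

    def step(self, updates):                       # updates: (n_clients, dim)
        ref = np.median(updates, axis=0)           # robust reference update
        sims = np.array([
            u @ ref / (np.linalg.norm(u) * np.linalg.norm(ref) + 1e-12)
            for u in updates]).clip(min=0.0)
        self.reliability = (self.decay * self.reliability
                            + (1 - self.decay) * sims)
        w = np.where(self.reliability > self.block_below, self.reliability, 0.0)
        return (w[:, None] * updates).sum(axis=0) / (w.sum() + 1e-12)
```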

Poisoning Attacks with Generative Adversarial Nets

1 code implementation • 18 Jun 2019 • Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu

In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e., samples that look like genuine data points but degrade the classifier's accuracy when used for training.

BIG-bench Machine Learning • Data Poisoning
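A schematic of a pGAN-style generator objective: the generated poisoning points should both fool a discriminator (so they look genuine) and raise the target classifier's loss. The mixing weight, shapes, and exact damage term below are assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def generator_loss(G, D, C, z, y_class, alpha=0.7):
    x_fake = G(z)                                  # candidate poisoning points
    # Realism: the discriminator should score the fakes as real data.
    realism = F.binary_cross_entropy_with_logits(
        D(x_fake), torch.ones(z.size(0), 1))
    # Damage (schematic surrogate): raise the classifier's loss on the
    # generated points so that training on them degrades accuracy.
    damage = -F.cross_entropy(C(x_fake), y_class)
    return alpha * realism + (1 - alpha) * damage  # alpha trades stealth vs. harm
```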

Sensitivity of Deep Convolutional Networks to Gabor Noise

1 code implementation • ICML 2019 Deep Phenomena Workshop • Kenneth T. Co, Luis Muñoz-González, Emil C. Lupu

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset.
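A minimal sketch of a Gabor-noise perturbation: sparse random impulses convolved with a Gabor kernel (a Gaussian envelope times a sinusoid). All parameter values are illustrative, not those used in the paper:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=23, sigma=4.0, freq=0.25, theta=0.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    rot = xx * np.cos(theta) + yy * np.sin(theta)      # oriented carrier
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * rot)

def gabor_noise(h=224, w=224, density=0.01, eps=0.03, seed=0):
    rng = np.random.default_rng(seed)
    impulses = rng.choice([0.0, -1.0, 1.0], size=(h, w),
                          p=[1 - density, density / 2, density / 2])
    noise = convolve2d(impulses, gabor_kernel(), mode="same")
    return eps * noise / (np.abs(noise).max() + 1e-12)  # scale to L-inf budget
```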

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks

2 code implementations • 30 Sep 2018 • Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu

Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time.

Bayesian Optimization
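A hedged sketch of the black-box loop: search over procedural-noise parameters and keep any perturbation that flips the model's prediction. The paper optimizes these parameters with Bayesian optimization; plain random search is substituted here for brevity. `gabor_kernel` is the helper from the Gabor-noise sketch above:

```python
import numpy as np
from scipy.signal import convolve2d

def black_box_attack(predict, x, n_trials=100, eps=0.03, seed=0):
    """predict: query-only function, image -> label; x: (h, w) image in [0, 1]."""
    rng = np.random.default_rng(seed)
    clean_label = predict(x)
    for _ in range(n_trials):
        kernel = gabor_kernel(sigma=rng.uniform(1, 8),
                              freq=rng.uniform(0.05, 0.5),
                              theta=rng.uniform(0, np.pi))
        impulses = rng.choice([0.0, 1.0], size=x.shape, p=[0.99, 0.01])
        noise = convolve2d(impulses, kernel, mode="same")
        noise = eps * noise / (np.abs(noise).max() + 1e-12)
        if predict(np.clip(x + noise, 0, 1)) != clean_label:
            return noise                           # successful evasion
    return None
```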

Mitigation of Adversarial Attacks through Embedded Feature Selection

no code implementations • 16 Aug 2018 • Ziyi Bao, Luis Muñoz-González, Emil C. Lupu

We propose a design methodology to evaluate the security of machine learning classifiers with embedded feature selection against adversarial examples crafted using different attack strategies.

BIG-bench Machine Learning • feature selection
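A minimal sketch of one common embedded feature selection mechanism, L1 regularization: sparser models expose fewer features for the attacker to perturb. Data and values are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=100, n_informative=10,
                           random_state=0)
for C in (0.01, 0.1, 1.0):                      # smaller C = stronger L1 penalty
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    kept = np.count_nonzero(clf.coef_)
    print(f"C={C}: {kept} features retained")   # fewer features, smaller attack surface
```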

Label Sanitization against Label Flipping Poisoning Attacks

no code implementations • 2 Mar 2018 • Andrea Paudice, Luis Muñoz-González, Emil C. Lupu

Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points.

Data Poisoning
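A minimal sketch of kNN-based label sanitization: relabel a training point when its nearest neighbours overwhelmingly disagree with its assigned label. The values of k and the agreement threshold are illustrative choices:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sanitize_labels(X, y, k=10, eta=0.9):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                  # idx[:, 0] is the point itself
    y_clean = y.copy()
    for i in range(len(y)):
        neighbour_labels = y[idx[i, 1:]]
        majority = np.bincount(neighbour_labels).argmax()
        if np.mean(neighbour_labels == majority) >= eta and majority != y[i]:
            y_clean[i] = majority              # suspected flipped label
    return y_clean
```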

Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection

1 code implementation • 8 Feb 2018 • Andrea Paudice, Luis Muñoz-González, Andras Gyorgy, Emil C. Lupu

We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered when crafting the attack.

Anomaly Detection • BIG-bench Machine Learning • +3
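A hedged sketch of this style of defence: fit per-class distance statistics on a small trusted dataset, then discard training points that fall far outside them. The centroid-distance test and percentile threshold are simplifying assumptions:

```python
import numpy as np

def filter_outliers(X_trusted, y_trusted, X_train, y_train, pct=95):
    keep = np.ones(len(X_train), dtype=bool)
    for c in np.unique(y_trusted):
        centroid = X_trusted[y_trusted == c].mean(axis=0)
        d_trusted = np.linalg.norm(X_trusted[y_trusted == c] - centroid, axis=1)
        threshold = np.percentile(d_trusted, pct)   # from trusted data only
        mask = y_train == c
        d = np.linalg.norm(X_train[mask] - centroid, axis=1)
        keep[np.where(mask)[0][d > threshold]] = False
    return X_train[keep], y_train[keep]
```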

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

no code implementations • 29 Aug 2017 • Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli

This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process.

Data Poisoning • Handwritten Digit Recognition • +1
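A hedged sketch of the underlying idea with plain unrolled differentiation: train for a few differentiable SGD steps with the candidate poisoning point included, then take the gradient of the validation loss with respect to that point. Back-gradient optimization in the paper avoids storing this graph by reversing the training dynamics; the logistic-regression setup below is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def poison_gradient(x_p, y_p, X_tr, y_tr, X_val, y_val, steps=10, lr=0.1):
    """x_p: candidate poisoning point, a tensor with requires_grad=True."""
    w = torch.zeros(X_tr.size(1), requires_grad=True)
    for _ in range(steps):                      # differentiable inner training
        X = torch.cat([X_tr, x_p.unsqueeze(0)])
        y = torch.cat([y_tr, y_p.unsqueeze(0)])
        loss = F.binary_cross_entropy_with_logits(X @ w, y)
        (g,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - lr * g                          # keep the graph alive
    val_loss = F.binary_cross_entropy_with_logits(X_val @ w, y_val)
    return torch.autograd.grad(val_loss, x_p)[0]  # ascend this to poison
```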

Efficient Attack Graph Analysis through Approximate Inference

no code implementations • 22 Jun 2016 • Luis Muñoz-González, Daniele Sgandurra, Andrea Paudice, Emil C. Lupu

We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages of approximate inference techniques to scale to larger attack graphs.

Bayesian Inference • Clustering

Exact Inference Techniques for the Analysis of Bayesian Attack Graphs

no code implementations • 8 Oct 2015 • Luis Muñoz-González, Daniele Sgandurra, Martín Barrère, Emil Lupu

Attack graphs are a powerful tool for security risk assessment by analysing network vulnerabilities and the paths attackers can use to compromise network resources.
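For intuition, a tiny sketch of exact inference on a toy Bayesian attack graph by brute-force enumeration, with binary "compromised" states and noisy-OR combination of parents. The graph and probabilities are invented; the exponential cost of this approach is what motivates the approximate inference in the companion paper above:

```python
import itertools

# node -> list of (parent, exploit success probability); toy three-node graph
graph = {"web": [], "db": [("web", 0.6)], "admin": [("web", 0.3), ("db", 0.8)]}
prior = {"web": 0.5}                 # probability the entry point is compromised

def joint_prob(state):
    p = 1.0
    for node, parents in graph.items():
        if not parents:
            p_true = prior[node]
        else:                        # noisy-OR over compromised parents
            p_fail = 1.0
            for parent, q in parents:
                if state[parent]:
                    p_fail *= 1 - q
            p_true = 1 - p_fail
        p *= p_true if state[node] else 1 - p_true
    return p

nodes = list(graph)
for target in nodes:
    marginal = sum(joint_prob(dict(zip(nodes, bits)))
                   for bits in itertools.product([0, 1], repeat=len(nodes))
                   if bits[nodes.index(target)])
    print(f"P({target} compromised) = {marginal:.3f}")
```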
