no code implementations • 23 Dec 2024 • Herve Debar, Sven Dietrich, Pavel Laskov, Emil C. Lupu, Eirini Ntoutsi
Concerns about the potential security implications of such wide-scale adoption of LLMs have led to the creation of this working group on the security of LLMs.
no code implementations • 2 Jun 2023 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu
We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters and models the attack as a multiobjective bilevel optimization problem.
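To make the bilevel structure concrete, here is a minimal sketch (not the paper's algorithm) in which an attacker poisons a ridge-regression learner whose regularization strength is re-selected on a validation set after each poisoning step, so the attack accounts for its effect on the hyperparameter; the finite-difference ascent, the grid of lambdas, and all variable names below are illustrative.

```python
# Illustrative bilevel poisoning sketch: the inner problem trains ridge regression
# and re-selects its regularization strength; the outer problem perturbs a handful
# of poisoning points to maximize validation error.
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def select_lambda(X, y, X_val, y_val, grid=(1e-3, 1e-2, 1e-1, 1.0, 10.0)):
    # Inner hyperparameter selection: pick lambda minimizing validation MSE.
    losses = [np.mean((X_val @ ridge_fit(X, y, lam) - y_val) ** 2) for lam in grid]
    return grid[int(np.argmin(losses))]

def attacker_loss(X_poison, X_clean, y_clean, y_poison, X_val, y_val):
    X = np.vstack([X_clean, X_poison])
    y = np.concatenate([y_clean, y_poison])
    lam = select_lambda(X, y, X_val, y_val)      # hyperparameter reacts to the poison
    w = ridge_fit(X, y, lam)
    return np.mean((X_val @ w - y_val) ** 2)     # attacker maximizes validation error

# Tiny synthetic regression task.
d, n = 5, 60
w_true = rng.normal(size=d)
X_clean = rng.normal(size=(n, d)); y_clean = X_clean @ w_true + 0.1 * rng.normal(size=n)
X_val = rng.normal(size=(30, d));  y_val = X_val @ w_true + 0.1 * rng.normal(size=30)

# Attacker controls a few points and updates them by (slow) finite differences.
X_poison = rng.normal(size=(5, d)); y_poison = -5.0 * np.ones(5)
eps, step = 1e-3, 0.5
for _ in range(20):
    base = attacker_loss(X_poison, X_clean, y_clean, y_poison, X_val, y_val)
    grad = np.zeros_like(X_poison)
    for i in range(X_poison.shape[0]):
        for j in range(d):
            Xp = X_poison.copy(); Xp[i, j] += eps
            grad[i, j] = (attacker_loss(Xp, X_clean, y_clean, y_poison, X_val, y_val) - base) / eps
    X_poison = np.clip(X_poison + step * grad, -3, 3)   # gradient *ascent*, box-constrained

print("validation MSE after poisoning:",
      attacker_loss(X_poison, X_clean, y_clean, y_poison, X_val, y_val))
```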
no code implementations • 29 Apr 2022 • Zhongyuan Hau, Soteris Demetriou, Emil C. Lupu
We achieve this by searching for void regions (shadows) in the point cloud and locating the obstacles that cast them.
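The same idea can be illustrated with a toy 2-D sketch, assuming the sensor sits at the origin and treating an angular sector that contains a close return but no farther returns as a candidate shadow; the binning scheme, ranges, and thresholds below are all illustrative, not the paper's method.

```python
# Toy 2-D void-region ("shadow") sketch: a close return in some direction with no
# farther returns behind it suggests an occluder casting a shadow.
import numpy as np

def find_shadowed_sectors(points, n_bins=72, near=5.0, far=20.0):
    """points: (N, 2) LiDAR returns around a sensor at the origin."""
    angles = np.arctan2(points[:, 1], points[:, 0])
    ranges = np.linalg.norm(points, axis=1)
    bins = np.floor((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    shadowed = []
    for b in range(n_bins):
        r = ranges[bins == b]
        if r.size and r.min() < near and not np.any(r > far):
            shadowed.append(b)          # close obstacle, nothing visible behind it
    return shadowed

rng = np.random.default_rng(1)
background = rng.uniform(-30, 30, size=(500, 2))
ang = np.arctan2(background[:, 1], background[:, 0])
rad = np.linalg.norm(background, axis=1)
background = background[~((np.abs(ang) < 0.1) & (rad > 3.5))]     # carve out a shadow
obstacle = np.array([[3.0, 0.1], [3.1, -0.1], [3.2, 0.0]])         # close object casting it
cloud = np.vstack([background, obstacle])
print("sectors with a candidate shadow:", find_shadowed_sectors(cloud))
```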
no code implementations • 19 Apr 2022 • Kenneth T. Co, David Martinez-Rego, Zhongyuan Hau, Emil C. Lupu
In this work, we propose a novel approach, Jacobian Ensembles, a combination of Jacobian regularization and model ensembles, to significantly increase the robustness against UAPs whilst maintaining or improving model accuracy.
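As a rough illustration of the two ingredients named above, the PyTorch sketch below adds a random-projection approximation of the input-output Jacobian norm to the training loss and predicts by averaging an ensemble of such models; the network, hyperparameters, and the `train_loader` it assumes are placeholders, and the exact regularizer used in the paper may differ.

```python
# Sketch of Jacobian regularization + model ensembling (illustrative settings).
import torch
import torch.nn as nn
import torch.nn.functional as F

def jacobian_penalty(model, x, n_proj=1):
    """Approximate ||dF/dx||^2 using random projections of the logits."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    penalty = 0.0
    for _ in range(n_proj):
        v = torch.randn_like(logits)
        v = v / v.norm(dim=1, keepdim=True)
        (grad,) = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
        penalty = penalty + grad.pow(2).sum(dim=tuple(range(1, grad.dim()))).mean()
    return penalty / n_proj

def make_model():
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))

def train_one(model, loader, lam=0.01, epochs=1):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in loader:
            loss = F.cross_entropy(model(x), y) + lam * jacobian_penalty(model, x)
            opt.zero_grad(); loss.backward(); opt.step()
    return model

def ensemble_predict(models, x):
    # Average the softmax outputs of the ensemble members.
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1)

# Usage (with any MNIST-like DataLoader named `train_loader`, assumed here):
# models = [train_one(make_model(), train_loader) for _ in range(3)]
# preds = ensemble_predict(models, x_batch)
```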
no code implementations • 23 May 2021 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu
Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance.
no code implementations • 16 May 2021 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Emil C. Lupu
Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit systemic vulnerabilities and enable physically realizable and robust attacks against Deep Neural Networks (DNNs).
1 code implementation • 21 Apr 2021 • Kenneth T. Co, David Martinez Rego, Emil C. Lupu
Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data.
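A common way to compute such a perturbation is stochastic gradient ascent on a single shared perturbation under an L-infinity budget; the PyTorch sketch below shows that generic recipe (not necessarily the exact procedure used in this work), assuming a trained `model` and a `loader` of labelled examples.

```python
# Generic UAP computation: one shared perturbation, updated to increase the loss
# across batches, and clamped to an L-infinity budget after each step.
import torch
import torch.nn.functional as F

def compute_uap(model, loader, eps=8 / 255, steps=500, lr=0.01, input_shape=(1, 28, 28)):
    model.eval()
    delta = torch.zeros((1, *input_shape), requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(loader); x, y = next(it)
        loss = -F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)  # ascend the loss
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)       # keep the shared perturbation within budget
    return delta.detach()

# Usage: uap = compute_uap(model, test_loader); fooled = (model(x + uap).argmax(1) != y)
```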
no code implementations • 7 Feb 2021 • Zhongyuan Hau, Kenneth T. Co, Soteris Demetriou, Emil C. Lupu
LiDARs play a critical role in Autonomous Vehicles' (AVs) perception and their safe operation.
1 code implementation • 10 Dec 2020 • Alberto G. Matachana, Kenneth T. Co, Luis Muñoz-González, David Martinez, Emil C. Lupu
In this work, we analyze the effect of various compression techniques to UAP attacks, including different forms of pruning and quantization.
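The kind of comparison described can be sketched as follows: build pruned and dynamically quantized variants of a trained model and measure how often a fixed UAP flips each variant's predictions. The PyTorch pruning and quantization utilities are real APIs; the protocol is simplified, and the `model`, `test_loader`, and `uap` objects are assumed.

```python
# Compare UAP success rates across a dense model and its compressed variants.
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def pruned_copy(model, amount=0.5):
    m = copy.deepcopy(model)
    for module in m.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")        # make the pruning permanent
    return m

def quantized_copy(model):
    return torch.quantization.quantize_dynamic(copy.deepcopy(model), {nn.Linear}, dtype=torch.qint8)

@torch.no_grad()
def uap_success_rate(model, loader, uap):
    fooled, total = 0, 0
    for x, y in loader:
        clean = model(x).argmax(1)
        adv = model(torch.clamp(x + uap, 0, 1)).argmax(1)
        fooled += (clean != adv).sum().item(); total += y.numel()
    return fooled / total

# Usage, given a trained `model`, a `test_loader`, and a precomputed `uap`:
# for name, m in [("dense", model), ("pruned", pruned_copy(model)), ("int8", quantized_copy(model))]:
#     print(name, uap_success_rate(m, test_loader, uap))
```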
no code implementations • 28 Feb 2020 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu
We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters by modelling the attack as a multiobjective bilevel optimisation problem.
1 code implementation • 23 Nov 2019 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker, Emil C. Lupu
Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise.
no code implementations • 11 Sep 2019 • Luis Muñoz-González, Kenneth T. Co, Emil C. Lupu
Federated learning enables training collaborative machine learning models at scale with many participants whilst preserving the privacy of their datasets.
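For context, a deliberately simple robust-aggregation sketch is shown below: client updates that look dissimilar to a robust reference are down-weighted before averaging. This illustrates the general idea of defending aggregation against poisoned participants, not the specific adaptive model averaging scheme proposed in the paper.

```python
# Generic robust aggregation of client updates in a federated round.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def robust_aggregate(updates, threshold=0.0):
    """updates: list of flattened client model updates (np.ndarray)."""
    reference = np.median(np.stack(updates), axis=0)        # robust reference direction
    sims = np.array([cosine(u, reference) for u in updates])
    weights = np.where(sims > threshold, sims, 0.0)          # suppress outlying updates
    if weights.sum() == 0:
        return reference
    weights = weights / weights.sum()
    return np.sum([w * u for w, u in zip(weights, updates)], axis=0)

# Toy round: 8 honest clients pushing toward +1, 2 poisoned clients pushing the other way.
rng = np.random.default_rng(0)
honest = [np.ones(10) + 0.1 * rng.normal(size=10) for _ in range(8)]
poisoned = [-5.0 * np.ones(10) for _ in range(2)]
print(robust_aggregate(honest + poisoned)[:3])   # stays close to the honest direction
```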
1 code implementation • 18 Jun 2019 • Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e., samples that look like genuine data points but degrade the classifier's accuracy when used for training.
1 code implementation • ICML Workshop Deep_Phenomen 2019 • Kenneth T. Co, Luis Muñoz-González, Emil C. Lupu
Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset.
no code implementations • 30 Apr 2019 • Erisa Karafili, Linna Wang, Emil C. Lupu
In this work, we propose an argumentation-based reasoner (ABR) as a proof-of-concept tool that can help a forensics analyst during the analysis of forensic evidence and the attribution process.
2 code implementations • 30 Sep 2018 • Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time.
no code implementations • 16 Aug 2018 • Ziyi Bao, Luis Muñoz-González, Emil C. Lupu
We propose a design methodology to evaluate the security of machine learning classifiers with embedded feature selection against adversarial examples crafted using different attack strategies.
no code implementations • 2 Mar 2018 • Andrea Paudice, Luis Muñoz-González, Emil C. Lupu
Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points.
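A small illustration of such an attack (not the paper's algorithm): greedily flip the labels whose individual flips hurt validation accuracy the most, as scored by retraining a simple scikit-learn classifier.

```python
# Toy label flipping attack against logistic regression on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def greedy_label_flips(X_tr, y_tr, X_val, y_val, budget):
    scores = []
    for i in range(len(y_tr)):
        y_flip = y_tr.copy(); y_flip[i] = 1 - y_flip[i]
        acc = LogisticRegression(max_iter=200).fit(X_tr, y_flip).score(X_val, y_val)
        scores.append(acc)
    worst = np.argsort(scores)[:budget]          # flips that drop validation accuracy most
    y_poisoned = y_tr.copy(); y_poisoned[worst] = 1 - y_poisoned[worst]
    return y_poisoned

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)
clean_acc = LogisticRegression(max_iter=200).fit(X_tr, y_tr).score(X_val, y_val)
y_pois = greedy_label_flips(X_tr, y_tr, X_val, y_val, budget=int(0.2 * len(y_tr)))
pois_acc = LogisticRegression(max_iter=200).fit(X_tr, y_pois).score(X_val, y_val)
print(f"clean accuracy {clean_acc:.2f} vs poisoned accuracy {pois_acc:.2f}")
```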
1 code implementation • 8 Feb 2018 • Andrea Paudice, Luis Muñoz-González, Andras Gyorgy, Emil C. Lupu
We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered when crafting the attack.
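This observation motivates distance-based anomaly detection as a defense; a hedged sketch is shown below, flagging training points that lie unusually far from a small trusted set of the same class. The kNN criterion, percentile threshold, and toy data are illustrative choices, not necessarily the paper's exact settings.

```python
# Flag training points whose k-th nearest trusted neighbour of the same class is
# unusually far away, with the threshold calibrated on the trusted points.
import numpy as np

def flag_outliers(X_train, y_train, X_trusted, y_trusted, k=3, percentile=95):
    flags = np.zeros(len(X_train), dtype=bool)
    for c in np.unique(y_trusted):
        T = X_trusted[y_trusted == c]
        idx = np.where(y_train == c)[0]
        # distance of each training point to its k-th nearest trusted neighbour
        d = np.sort(np.linalg.norm(X_train[idx, None, :] - T[None, :, :], axis=2), axis=1)[:, k - 1]
        # calibrate the threshold on the trusted points themselves (skip self-distance)
        d_ref = np.sort(np.linalg.norm(T[:, None, :] - T[None, :, :], axis=2), axis=1)[:, k]
        flags[idx] = d > np.percentile(d_ref, percentile)
    return flags

rng = np.random.default_rng(0)
X_clean = rng.normal(size=(200, 2)); y_clean = (X_clean[:, 0] > 0).astype(int)
X_pois = rng.normal(loc=6.0, size=(10, 2)); y_pois = np.zeros(10, dtype=int)  # far-away poison
X_tr = np.vstack([X_clean, X_pois]); y_tr = np.concatenate([y_clean, y_pois])
X_trusted, y_trusted = X_clean[:50], y_clean[:50]                              # small trusted set
print("poison points flagged:", flag_outliers(X_tr, y_tr, X_trusted, y_trusted)[-10:].sum(), "/ 10")
```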
no code implementations • 29 Aug 2017 • Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process.
no code implementations • 1 May 2017 • Erisa Karafili, Antonis C. Kakas, Nikolaos I. Spanoudakis, Emil C. Lupu
The increase in connectivity and its impact on everyday life are raising new security problems and amplifying existing ones that are becoming important for social good.
no code implementations • 22 Jun 2016 • Luis Muñoz-González, Daniele Sgandurra, Andrea Paudice, Emil C. Lupu
We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages of approximate inference techniques to scale to larger attack graphs.
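To make the comparison concrete, the sketch below runs exact enumeration and synchronous ("parallel") loopy belief propagation side by side on a tiny pairwise model standing in for an attack graph; the graph, potentials, and message schedule are illustrative and unrelated to the paper's experimental setup.

```python
# Exact enumeration vs. synchronous loopy belief propagation on a small loopy model.
import itertools
import numpy as np

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]               # cycle plus a chord
unary = {i: np.array([0.7, 0.3]) for i in nodes}               # node potentials
pair = {e: np.array([[1.2, 0.8], [0.8, 1.2]]) for e in edges}  # "compromise spreads" coupling

def exact_marginals():
    p = np.zeros((len(nodes), 2))
    for x in itertools.product([0, 1], repeat=len(nodes)):
        w = np.prod([unary[i][x[i]] for i in nodes])
        w *= np.prod([pair[(i, j)][x[i], x[j]] for (i, j) in edges])
        for i in nodes:
            p[i, x[i]] += w
    return p / p.sum(axis=1, keepdims=True)

def loopy_bp(n_iters=50):
    # msgs[(i, j)] is the message from i to j, over the states of j
    msgs = {(i, j): np.ones(2) for (i, j) in edges}
    msgs.update({(j, i): np.ones(2) for (i, j) in edges})
    neigh = {i: [j for j in nodes if (i, j) in msgs] for i in nodes}
    for _ in range(n_iters):
        new = {}
        for (i, j) in msgs:                                    # synchronous ("parallel") update
            psi = pair[(i, j)] if (i, j) in pair else pair[(j, i)].T
            incoming = np.prod([msgs[(k, i)] for k in neigh[i] if k != j], axis=0)
            m = psi.T @ (unary[i] * incoming)
            new[(i, j)] = m / m.sum()
        msgs = new
    beliefs = np.zeros((len(nodes), 2))
    for i in nodes:
        b = unary[i] * np.prod([msgs[(k, i)] for k in neigh[i]], axis=0)
        beliefs[i] = b / b.sum()
    return beliefs

print("exact:\n", exact_marginals())
print("loopy BP:\n", loopy_bp())
```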