no code implementations • 19 Apr 2022 • Kenneth T. Co, David Martinez-Rego, Zhongyuan Hau, Emil C. Lupu
In this work, we propose Jacobian Ensembles, a novel approach combining Jacobian regularization with model ensembles to significantly increase robustness against UAPs whilst maintaining or improving model accuracy.
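A minimal sketch of the idea in PyTorch, assuming a list of `models` and an illustrative regularization weight; the single-projection Jacobian estimate and the loss weighting are placeholder choices, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def jacobian_ensemble_loss(models, x, y, reg_weight=0.01):
    """Cross-entropy on the ensemble-averaged logits plus an
    input-Jacobian penalty. Sketch only: `reg_weight` and the
    one-sample Frobenius-norm estimate are illustrative choices."""
    x = x.clone().requires_grad_(True)
    logits = torch.stack([m(x) for m in models]).mean(dim=0)  # ensemble average
    task_loss = F.cross_entropy(logits, y)
    # One-sample random-projection estimate of ||d(logits)/dx||_F^2
    v = torch.randn_like(logits)
    (grad,) = torch.autograd.grad(logits, x, grad_outputs=v, create_graph=True)
    jac_penalty = grad.pow(2).sum() / x.shape[0]
    return task_loss + reg_weight * jac_penalty
```

Both ingredients smooth the model's response to small input shifts: ensembling averages out individual models' sensitive directions, while the Jacobian penalty shrinks input gradients directly, which is the intuition behind combining them against UAPs.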
no code implementations • 16 May 2021 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Emil C. Lupu
Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit systemic vulnerabilities in Deep Neural Networks (DNNs) and enable physically realizable, robust attacks against them.
1 code implementation • 21 Apr 2021 • Kenneth T. Co, David Martinez Rego, Emil C. Lupu
Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data.
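For illustration, a common recipe for crafting a UAP is stochastic gradient ascent on a single perturbation shared across all inputs, projected onto an L-infinity ball; a sketch assuming a PyTorch `model` and a `loader` of 224x224 RGB images, with illustrative hyperparameters:

```python
import torch
import torch.nn.functional as F

def craft_uap(model, loader, eps=8/255, step=1/255, epochs=5, device="cpu"):
    """Iterative UAP sketch: one shared `delta` is updated on every
    batch to increase the loss, and kept inside the L-infinity ball
    of radius `eps`. Hyperparameters are illustrative."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # only the perturbation is optimized
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(model(x + delta), y)
            loss.backward()
            with torch.no_grad():
                delta += step * delta.grad.sign()  # ascend the loss
                delta.clamp_(-eps, eps)            # project to the L-inf ball
                delta.grad.zero_()
    return delta.detach()
```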
no code implementations • 7 Feb 2021 • Zhongyuan Hau, Kenneth T. Co, Soteris Demetriou, Emil C. Lupu
LiDARs play a critical role in the perception and safe operation of Autonomous Vehicles (AVs).
1 code implementation • 10 Dec 2020 • Alberto G. Matachana, Kenneth T. Co, Luis Muñoz-González, David Martinez, Emil C. Lupu
In this work, we analyze the effect of various compression techniques, including different forms of pruning and quantization, on robustness to UAP attacks.
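As a sketch of the kind of comparison involved, assuming PyTorch models and a precomputed UAP `delta`; the pruning amount, the quantized layer set, and the fooling-rate metric here are illustrative stand-ins for the paper's configurations:

```python
import copy
import torch
import torch.nn.utils.prune as prune

def compress_variants(model):
    """Two representative compressed variants: global magnitude
    pruning and dynamic quantization. Amounts and layer choices
    are illustrative, not the paper's exact configurations."""
    pruned = copy.deepcopy(model)
    params = [(m, "weight") for m in pruned.modules()
              if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=0.5)
    quantized = torch.ao.quantization.quantize_dynamic(
        copy.deepcopy(model), {torch.nn.Linear}, dtype=torch.qint8)
    return pruned, quantized

def fooling_rate(model, loader, delta):
    """Fraction of inputs whose prediction flips when the UAP is added."""
    flipped = total = 0
    with torch.no_grad():
        for x, _ in loader:
            clean = model(x).argmax(dim=1)
            adv = model(x + delta).argmax(dim=1)
            flipped += (clean != adv).sum().item()
            total += x.shape[0]
    return flipped / total
```

Comparing `fooling_rate` across the original, pruned, and quantized variants, for UAPs crafted on each, is the shape of the robustness and transferability analysis described above.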
1 code implementation • 23 Nov 2019 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker, Emil C. Lupu
Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise.
no code implementations • 11 Sep 2019 • Luis Muñoz-González, Kenneth T. Co, Emil C. Lupu
Federated learning enables collaborative training of machine learning models at scale with many participants whilst preserving the privacy of their datasets.
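For background, one round of the standard federated averaging (FedAvg) loop looks roughly as follows; a sketch with a placeholder `local_update` for each client's private training, not necessarily the aggregation scheme studied in this work:

```python
import copy
import torch

def federated_round(global_model, clients, local_update):
    """One FedAvg-style round: each client trains a copy of the
    global model on its private data via the caller-supplied
    `local_update`, and only the resulting weights, never the
    data, are sent back and averaged."""
    states = []
    for client_data in clients:
        local = copy.deepcopy(global_model)
        local_update(local, client_data)  # local training on private data
        states.append(local.state_dict())
    avg = {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```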
1 code implementation • ICML Workshop on Deep Phenomena 2019 • Kenneth T. Co, Luis Muñoz-González, Emil C. Lupu
Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset.
2 code implementations • 30 Sep 2018 • Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time.
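To make the definition concrete, the classic fast gradient sign method (FGSM) builds such an input with a single gradient step; a minimal white-box sketch with an illustrative `eps`, not necessarily the attack studied in this work:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8/255):
    """Fast Gradient Sign Method: perturb the input one `eps`-sized
    step in the sign of the loss gradient. Assumes inputs scaled to
    [0, 1]; `eps` is illustrative."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```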