Search Results for author: Kenneth T. Co

Found 8 papers, 5 papers with code

Real-time Detection of Practical Universal Adversarial Perturbations

no code implementations • 16 May 2021 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Emil C. Lupu

Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit systemic vulnerabilities of Deep Neural Networks (DNNs) and enable physically realizable, robust attacks against them.

Image Classification • Object Detection
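The abstract's premise is that one fixed, input-agnostic perturbation can fool a model on a large fraction of inputs. A minimal PyTorch sketch of that premise (a hypothetical fooling-rate helper, not the paper's real-time detection method):

```python
import torch

def uap_fooling_rate(model, delta, loader, device="cpu"):
    """Fraction of inputs whose prediction flips when the SAME fixed
    perturbation `delta` is added to every image (inputs assumed in [0, 1])."""
    model.eval()
    fooled, total = 0, 0
    with torch.no_grad():
        for x, _ in loader:
            x = x.to(device)
            clean = model(x).argmax(dim=1)
            adv = model((x + delta).clamp(0.0, 1.0)).argmax(dim=1)
            fooled += (clean != adv).sum().item()
            total += x.size(0)
    return fooled / total
```

A high fooling rate for a single `delta` across the whole loader is what makes the perturbation "universal".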

Jacobian Regularization for Mitigating Universal Adversarial Perturbations

1 code implementation • 21 Apr 2021 • Kenneth T. Co, David Martinez Rego, Emil C. Lupu

Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data.
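A standard way to realize this kind of mitigation is to penalize the norm of the model's input-output Jacobian during training. The sketch below uses a common single-projection estimator of the squared Frobenius norm (a generic technique assumed here for illustration, not necessarily the paper's exact regularizer):

```python
import torch
import torch.nn.functional as F

def jacobian_penalty(model, x):
    """Single random-projection estimate of ||J||_F^2, where J is the
    Jacobian of the logits w.r.t. the input (unbiased up to a constant)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    v = torch.randn_like(logits)
    v = v / v.norm(dim=1, keepdim=True)          # unit vector per sample
    (grad,) = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
    return grad.pow(2).sum(dim=tuple(range(1, grad.dim()))).mean()

def training_loss(model, x, y, lam=0.01):
    # Task loss plus Jacobian penalty; `lam` trades clean accuracy for robustness.
    return F.cross_entropy(model(x), y) + lam * jacobian_penalty(model, x)
```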

Robustness and Transferability of Universal Attacks on Compressed Models

1 code implementation • 10 Dec 2020 • Alberto G. Matachana, Kenneth T. Co, Luis Muñoz-González, David Martinez, Emil C. Lupu

In this work, we analyze the effect of various compression techniques on UAP attacks, including different forms of pruning and quantization.

Neural Network Compression • Quantization
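For context, both compression families named in the abstract are available as stock PyTorch utilities. An illustrative setup (not the paper's pipeline) that derives pruned and quantized variants, whose UAP susceptibility could then be compared with a fooling-rate metric like the sketch above:

```python
import torch
import torch.nn.utils.prune as prune

def magnitude_prune(model, amount=0.5):
    # L1 (magnitude) unstructured pruning of conv and linear weights.
    for m in model.modules():
        if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(m, name="weight", amount=amount)
    return model

def dynamic_quantize(model):
    # Post-training dynamic quantization of linear layers to int8.
    return torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8)
```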

Universal Adversarial Robustness of Texture and Shape-Biased Models

1 code implementation • 23 Nov 2019 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker, Emil C. Lupu

Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise.

Adversarial Robustness • Image Classification

Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging

no code implementations • 11 Sep 2019 • Luis Muñoz-González, Kenneth T. Co, Emil C. Lupu

Federated learning enables the collaborative training of machine learning models at scale with many participants while preserving the privacy of their datasets.

Federated Learning
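Byzantine robustness enters at the server's aggregation step: a plain average lets a single malicious client shift the global model arbitrarily, so a robust statistic is used instead. The coordinate-wise median below is a simple stand-in for illustration (the paper proposes an adaptive weighting scheme, which this is not):

```python
import torch

def robust_aggregate(client_updates):
    """client_updates: list of state_dicts with identical keys.
    Returns the coordinate-wise median across clients, bounding the
    influence any single (possibly Byzantine) client can exert."""
    keys = client_updates[0].keys()
    stacked = {k: torch.stack([u[k].float() for u in client_updates])
               for k in keys}
    return {k: v.median(dim=0).values for k, v in stacked.items()}
```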

Sensitivity of Deep Convolutional Networks to Gabor Noise

1 code implementation • ICML Workshop Deep_Phenomena 2019 • Kenneth T. Co, Luis Muñoz-González, Emil C. Lupu

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset.
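Gabor noise is a procedural noise obtained by sparse convolution of a Gabor kernel (a Gaussian-windowed sinusoid) with random impulses. A rough NumPy sketch with illustrative parameters, not the paper's settings:

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    # Gaussian envelope modulating an oriented sinusoid.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    rot = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * rot)

def gabor_noise(h, w, n_impulses=64, size=23, sigma=4.0, freq=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    canvas = np.zeros((h + size, w + size))
    for _ in range(n_impulses):
        y, x = rng.integers(0, h), rng.integers(0, w)
        k = gabor_kernel(size, sigma, freq, rng.uniform(0, np.pi))
        canvas[y:y + size, x:x + size] += rng.choice([-1.0, 1.0]) * k
    noise = canvas[size // 2:size // 2 + h, size // 2:size // 2 + w]
    return noise / np.abs(noise).max()   # normalize to [-1, 1]
```

Scaled by a small epsilon and added to inputs, such patterns serve as candidate input-agnostic perturbations.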

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks

2 code implementations • 30 Sep 2018 • Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu

Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time.
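Because the setting is black-box, noise parameters must be tuned using only model queries. A toy random-search loop over the `gabor_noise` sketch above (the `query_fn` interface is hypothetical; the paper itself explores more sample-efficient search such as Bayesian optimization):

```python
import numpy as np

def black_box_search(query_fn, h, w, eps=0.03, trials=100, rng=None):
    """Random search over Gabor-noise parameters. `query_fn` maps an
    (h, w) perturbation to a scalar the attacker wants to minimize,
    e.g. the model's confidence in the true class."""
    rng = np.random.default_rng() if rng is None else rng
    best, best_score = None, float("inf")
    for _ in range(trials):
        noise = eps * gabor_noise(              # from the sketch above
            h, w,
            n_impulses=int(rng.integers(16, 128)),
            sigma=float(rng.uniform(2.0, 8.0)),
            freq=float(rng.uniform(0.05, 0.25)),
            rng=rng)
        score = query_fn(noise)
        if score < best_score:
            best, best_score = noise, score
    return best
```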
