Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit systemic vulnerabilities of Deep Neural Networks (DNNs) and enable physically realizable, robust attacks against them.
LiDARs play a critical role in the perception of Autonomous Vehicles (AVs) and in their safe operation.
In this work, we analyze the effect of various compression techniques, including different forms of pruning and quantization, on UAP attacks.
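As a rough illustration of the kinds of compression considered, the sketch below applies unstructured magnitude pruning and post-training dynamic quantization to a small PyTorch classifier; the model architecture, pruning ratio, and bit-width are illustrative assumptions rather than the exact configurations studied here.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical small classifier, used only to illustrate the compression steps.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Unstructured magnitude pruning: zero out the 50% smallest-magnitude weights per layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Post-training dynamic quantization: store Linear weights as 8-bit integers.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```

The UAP attack can then be evaluated against `model` and `quantized_model` in the same way, which is the kind of comparison the analysis above refers to.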
Increasing the shape bias of deep neural networks has been shown to improve robustness to common corruptions and noise.
Federated learning enables collaborative training of machine learning models at scale across many participants whilst preserving the privacy of their datasets.
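For context, a minimal sketch of FedAvg-style aggregation is shown below: each participant trains locally, and only model parameters, never raw data, are shared and averaged. The function name and weighting scheme are illustrative assumptions, not a specific implementation from this work.

```python
import copy

def federated_average(client_state_dicts, client_sizes):
    """FedAvg-style aggregation: weighted average of locally trained model weights.

    client_state_dicts: list of PyTorch state_dicts returned by each participant
    client_sizes: number of local training examples held by each participant
    """
    total = float(sum(client_sizes))
    averaged = copy.deepcopy(client_state_dicts[0])
    for key in averaged:
        averaged[key] = sum(
            sd[key].float() * (n / total)
            for sd, n in zip(client_state_dicts, client_sizes)
        )
    return averaged
```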
Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset.
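To make the input-agnostic nature of UAPs concrete, the sketch below optimizes a single perturbation over many inputs under an L-infinity constraint, following a generic stochastic-gradient formulation; the input shape, epsilon, and learning rate are illustrative assumptions and not the exact procedure used in this work.

```python
import torch

def universal_perturbation(model, loader, loss_fn, eps=10/255, epochs=5, lr=0.01):
    """Sketch of a gradient-based universal (input-agnostic) perturbation.

    A single perturbation `delta` is optimized over many inputs so that it
    fools the model on a large portion of the dataset while staying inside
    an L-infinity ball of radius `eps`. Shape and hyperparameters are illustrative.
    """
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.SGD([delta], lr=lr)
    model.eval()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = -loss_fn(model(x + delta), y)  # maximize the classification loss
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # project back onto the eps-ball
    return delta.detach()
```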
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples---perturbed inputs specifically crafted to induce errors in the learning algorithm at test time.
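As a concrete per-input counterpart to the universal case above, the standard Fast Gradient Sign Method (FGSM) crafts such a perturbation in a single gradient step; the sketch below assumes a PyTorch model, inputs normalized to [0, 1], and an illustrative epsilon.

```python
import torch

def fgsm_attack(model, x, y, loss_fn, eps=8/255):
    """Single-step FGSM: move each input in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()  # signed-gradient step of size eps
    return x_adv.clamp(0, 1).detach()
```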