no code implementations • 15 Oct 2024 • Kevin Eykholt, Farhan Ahmed, Pratik Vaishnavi, Amir Rahmati
The vulnerability of machine learning models in adversarial scenarios has garnered significant interest in the academic community over the past decade, resulting in a myriad of attacks and defenses.
1 code implementation • 3 Aug 2023 • Kevin Eykholt, Taesung Lee, Douglas Schales, Jiyong Jang, Ian Molloy, Masha Zorin
In this work, we propose a new framework to enable the generation of adversarial inputs irrespective of the input type and task domain.
1 code implementation • 25 Oct 2022 • Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati
Training deep neural network classifiers that are certifiably robust against adversarial attacks is critical to ensuring the security and reliability of AI-controlled systems.
1 code implementation • 24 Oct 2022 • Farhan Ahmed, Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati
Since the discovery of adversarial attacks against machine learning models nearly a decade ago, research on adversarial machine learning has rapidly evolved into an eternal war between defenders, who seek to increase the robustness of ML models against adversarial attacks, and adversaries, who seek to develop better attacks capable of weakening or defeating these defenses.
1 code implementation • 21 Feb 2022 • Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati
On CIFAR-10, RRM trains a robust model $\sim 1.8\times$ faster than the state-of-the-art.
no code implementations • 19 May 2021 • Pau-Chen Cheng, Kevin Eykholt, Zhongshu Gu, Hani Jamjoom, K. R. Jayaram, Enriquillo Valdez, Ashish Verma
In this paper, we introduce TRUDA, a new cross-silo FL system, employing a trustworthy and decentralized aggregation architecture to break down information concentration with regard to a single aggregator.
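A minimal sketch of decentralized aggregation in the spirit described above, assuming each client partitions its flattened model update into disjoint shards handled by different aggregators so that no single aggregator observes a full update. The partitioning scheme and names are illustrative assumptions, not TRUDA's actual protocol.

```python
# Illustrative sketch (not TRUDA's protocol): split each client's update into
# disjoint shards so no single aggregator sees a complete update.
import numpy as np

def shard_update(update, num_aggregators):
    """Split a flat parameter-update vector into contiguous shards."""
    return np.array_split(update, num_aggregators)

def aggregate(per_client_shards):
    """Each aggregator averages only its own shard; shards are then rejoined."""
    num_aggregators = len(per_client_shards[0])
    averaged = [np.mean([client[i] for client in per_client_shards], axis=0)
                for i in range(num_aggregators)]
    return np.concatenate(averaged)

# Example with 3 clients and 2 aggregators:
updates = [np.random.randn(10) for _ in range(3)]
shards = [shard_update(u, 2) for u in updates]
global_update = aggregate(shards)
assert np.allclose(global_update, np.mean(updates, axis=0))
```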
no code implementations • 14 Dec 2020 • Shiqi Wang, Kevin Eykholt, Taesung Lee, Jiyong Jang, Ian Molloy
On CIFAR-10, a non-robust LeNet model has a 21.63% error rate, while a model created using verifiable training with an L-infinity robustness criterion of 8/255 has an error rate of 57.10%.
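To make the 8/255 L-infinity budget concrete, below is a generic projected gradient descent (PGD) evaluation sketch in PyTorch under that constraint. This is a standard empirical robustness test, not the paper's verifiable-training procedure; the model and data are placeholders supplied by the caller.

```python
# Hedged sketch: L-infinity bounded PGD attack with epsilon = 8/255.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Return adversarial examples within an L-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                 # ascent step on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)            # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                           # keep valid pixel range
    return x_adv.detach()
```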
no code implementations • 27 Nov 2019 • Pratik Vaishnavi, Tianji Cong, Kevin Eykholt, Atul Prakash, Amir Rahmati
Focusing on the observation that discrete pixelization in MNIST makes the background completely black and foreground completely white, we hypothesize that the important property for increasing robustness is the elimination of image background using attention masks before classifying an object.
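A minimal sketch of the background-elimination idea above, where a fixed intensity threshold stands in for an attention mask; the threshold value and function name are illustrative assumptions, not the paper's mechanism.

```python
# Illustrative background removal before classification (threshold in place of an attention mask).
import numpy as np

def remove_background(img, threshold=0.5):
    """img: grayscale image in [0, 1]; zero out near-black background pixels."""
    mask = (img > threshold).astype(img.dtype)  # crude foreground mask
    return img * mask

# Usage: pass the masked image to the classifier instead of the raw one, e.g.
# logits = classifier(remove_background(x))
```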
no code implementations • 12 Sep 2019 • Pratik Vaishnavi, Kevin Eykholt, Atul Prakash, Amir Rahmati
Numerous techniques have been proposed to harden machine learning algorithms and mitigate the effect of adversarial attacks.
no code implementations • 26 May 2019 • Kevin Eykholt, Swati Gupta, Atul Prakash, Amir Rahmati, Pratik Vaishnavi, Haizhong Zheng
Existing deep neural networks, e.g., those for image classification, have been shown to be vulnerable to adversarial images that cause misclassification without any perceptible change to the image.
no code implementations • 17 Dec 2018 • Kevin Eykholt, Atul Prakash
We provide a methodology, resilient feature engineering, for creating adversarially resilient classifiers.
no code implementations • 20 Jul 2018 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song
In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene.
no code implementations • CVPR 2018 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song
Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input.
no code implementations • 14 Jan 2018 • Amir Rahmati, Earlence Fernandes, Kevin Eykholt, Atul Prakash
When using risk-based permissions, device operations are grouped into units of similar risk, and users grant apps access to devices at that risk-based granularity.
Cryptography and Security
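A minimal sketch of risk-based permission granularity as described above: operations are grouped into risk tiers, and apps are granted whole tiers rather than individual operations. The tier names and device operations are hypothetical examples.

```python
# Illustrative risk-based permission grouping (tier and operation names are hypothetical).
from enum import Enum

class Risk(Enum):
    LOW = 1
    HIGH = 2

OPERATION_RISK = {
    "read_temperature": Risk.LOW,
    "set_temperature": Risk.LOW,
    "unlock_door": Risk.HIGH,
    "disable_alarm": Risk.HIGH,
}

def allowed(granted_tiers, operation):
    """An app may invoke an operation only if that operation's risk tier was granted."""
    return OPERATION_RISK[operation] in granted_tiers

# Example: an app granted only the LOW tier cannot unlock the door.
assert allowed({Risk.LOW}, "read_temperature")
assert not allowed({Risk.LOW}, "unlock_door")
```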
no code implementations • 21 Dec 2017 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Dawn Song, Tadayoshi Kohno, Amir Rahmati, Atul Prakash, Florian Tramer
Given that state-of-the-art object detection algorithms are harder to fool with the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples.
1 code implementation • 27 Jul 2017 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song
We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions.
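A hedged sketch of the general idea named above: optimize a (optionally masked) perturbation whose targeted effect is averaged over sampled transformations standing in for varying physical conditions. The transformation sampler, mask, loss weighting, and hyperparameters are assumptions for illustration, not the authors' exact RP2 formulation.

```python
# Simplified sketch of optimizing a perturbation robust to sampled transformations.
import torch
import torch.nn.functional as F

def robust_perturbation(model, x, target, sample_transform,
                        mask=None, steps=200, lr=0.01, lam=1e-3):
    """x: clean image batch; target: class the perturbed image should be read as."""
    delta = torch.zeros_like(x, requires_grad=True)
    if mask is None:
        mask = torch.ones_like(x)          # restrict the perturbation to a region if a mask is given
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + mask * delta).clamp(0, 1)
        x_t = sample_transform(x_adv)      # differentiable random transform, e.g. rotation/brightness
        loss = F.cross_entropy(model(x_t), target) + lam * delta.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (mask * delta).detach()
```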
no code implementations • 23 May 2017 • Earlence Fernandes, Amir Rahmati, Kevin Eykholt, Atul Prakash
The Internet of Things (IoT) is a new computing paradigm that spans wearable devices, homes, hospitals, cities, transportation, and critical infrastructure.
Cryptography and Security