no code implementations • 3 Feb 2024 • Xi Li, Hang Wang, David J. Miller, George Kesidis
A variety of defenses have been proposed against backdoor attacks on deep neural network (DNN) classifiers.
no code implementations • 29 Jan 2024 • Aakash Sharma, Vivek M. Bhasi, Sonali Singh, George Kesidis, Mahmut T. Kandemir, Chita R. Das
We propose a novel GPU-cluster scheduler for distributed DL (DDL) workloads that enables proximity-based consolidation of GPU resources, based on the DDL jobs' sensitivities to anticipated communication-network delays.
no code implementations • 28 Sep 2023 • Hang Wang, David J. Miller, George Kesidis
Well-known (non-malicious) sources of overfitting in deep neural net (DNN) classifiers include: i) large class imbalances; ii) insufficient training-set diversity; and iii) over-training.
no code implementations • 21 Aug 2023 • Xi Li, Songhe Wang, Ruiquan Huang, Mahanth Gowda, George Kesidis
Although there are extensive studies on backdoor attacks against image data, the susceptibility of video-based systems under backdoor attacks remains largely unexplored.
no code implementations • 18 Aug 2023 • Xi Li, Zhen Xiang, David J. Miller, George Kesidis
Backdoor (Trojan) attacks are an important type of adversarial exploit against deep neural networks (DNNs), wherein a test instance is (mis)classified to the attacker's target class whenever the attacker's backdoor trigger is present.
1 code implementation • 8 Aug 2023 • Hang Wang, Zhen Xiang, David J. Miller, George Kesidis
Deep neural networks are vulnerable to backdoor attacks (Trojans), where an attacker poisons the training set with backdoor triggers so that the neural network learns to classify test-time triggers to the attacker's designated target class.
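The poisoning step described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: `embed_trigger`, `poison_dataset`, the patch placement, and the poisoning rate are all assumptions made for the example; a small square trigger is stamped onto a fraction of the training images, which are then relabeled to the attacker's target class.

```python
import random

def embed_trigger(image, patch_value=1.0, patch_size=2):
    """Stamp a small square trigger in the bottom-right corner.
    `image` is a 2-D list of floats in [0, 1]."""
    poisoned = [row[:] for row in image]
    h, w = len(poisoned), len(poisoned[0])
    for i in range(h - patch_size, h):
        for j in range(w - patch_size, w):
            poisoned[i][j] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    """Return a copy of the data with a `rate` fraction of samples
    backdoored: trigger embedded and label flipped to `target_class`."""
    rng = random.Random(seed)
    n_poison = max(1, int(rate * len(images)))
    idx = rng.sample(range(len(images)), n_poison)
    new_images = [embed_trigger(img) if k in idx else img
                  for k, img in enumerate(images)]
    new_labels = [target_class if k in idx else lab
                  for k, lab in enumerate(labels)]
    return new_images, new_labels, idx
```

At training time the classifier then associates the trigger pattern with the target class while remaining accurate on clean data, which is what makes such attacks hard to notice.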
no code implementations • 30 Aug 2022 • Aakash Sharma, Vivek M. Bhasi, Sonali Singh, Rishabh Jain, Jashwant Raj Gunasekaran, Subrata Mitra, Mahmut Taylan Kandemir, George Kesidis, Chita R. Das
We aim to resolve this problem by introducing a comprehensive distributed deep learning (DDL) profiler, which can determine the various execution "stalls" that DDL suffers from while running on a public cloud.
1 code implementation • 13 May 2022 • Hang Wang, Zhen Xiang, David J. Miller, George Kesidis
Our detector leverages the influence of the backdoor attack, independent of the backdoor embedding mechanism, on the landscape of the classifier's outputs prior to the softmax layer.
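Detectors of this kind typically reduce each putative target class to a scalar statistic and then flag the class whose statistic is an unsupervised outlier. The following is a hedged sketch of only that last step, using a median-absolute-deviation (MAD) test; the per-class statistics are passed in as plain numbers, and the paper's actual pre-softmax landscape statistic is not reproduced here.

```python
def mad_outliers(stats, threshold=3.5):
    """Return indices of entries whose one-sided modified z-score
    (0.6745 * (x - median) / MAD) exceeds `threshold`."""
    s = sorted(stats)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    abs_dev = sorted(abs(x - median) for x in stats)
    mad = abs_dev[n // 2] if n % 2 else 0.5 * (abs_dev[n // 2 - 1] + abs_dev[n // 2])
    if mad == 0:
        # Degenerate case: everything but exact ties is an outlier.
        return [i for i, x in enumerate(stats) if x != median]
    return [i for i, x in enumerate(stats)
            if 0.6745 * (x - median) / mad > threshold]  # one-sided: large values only
```

The test is one-sided because a backdoored target class is expected to produce an abnormally *large* statistic, not a small one.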
1 code implementation • ICLR 2022 • Zhen Xiang, David J. Miller, George Kesidis
We show that our ET statistic is effective {\it using the same detection threshold}, irrespective of the classification domain, the attack configuration, and the BP reverse-engineering algorithm that is used.
no code implementations • 6 Dec 2021 • Xi Li, Zhen Xiang, David J. Miller, George Kesidis
A DNN under attack predicts the attacker-desired target class whenever a test sample from any source class is embedded with the backdoor pattern, while correctly classifying clean (attack-free) test samples.
no code implementations • 20 Oct 2021 • Zhen Xiang, David J. Miller, Siheng Chen, Xi Li, George Kesidis
Backdoor attacks (BA) are an emerging threat to deep neural network classifiers.
no code implementations • 6 Sep 2021 • Xi Li, George Kesidis, David J. Miller, Vladimir Lucic
We demonstrate a backdoor attack on a deep neural network used for regression.
no code implementations • 28 Jul 2021 • Xi Li, George Kesidis, David J. Miller, Maxime Bergeron, Ryan Ferguson, Vladimir Lucic
We describe a gradient-based method to discover local error maximizers of a deep neural network (DNN) used for regression, assuming the availability of an "oracle" capable of providing real-valued supervision (a regression target) for samples.
no code implementations • 28 May 2021 • Xi Li, David J. Miller, Zhen Xiang, George Kesidis
Data Poisoning (DP) is an effective attack that causes trained classifiers to misclassify their inputs.
1 code implementation • 21 May 2021 • Hang Wang, David J. Miller, George Kesidis
Deep Neural Networks (DNNs) have been shown vulnerable to Test-Time Evasion attacks (TTEs, or adversarial examples), which, by making small changes to the input, alter the DNN's decision.
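The "small changes to the input" can be made concrete with the standard FGSM-style construction (a well-known attack recipe, not this paper's defense), shown here on a linear scorer where the input gradient is just the weight vector: each coordinate is nudged by `eps` in the direction that pushes the score across the decision boundary.

```python
def fgsm_linear(x, w, b, eps):
    """For score(x) = w.x + b, the gradient of the score w.r.t. x is w,
    so the worst-case L-infinity perturbation of budget `eps` that flips
    a decision moves each coordinate by -/+ eps * sign(w_i)."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    direction = -1 if score > 0 else 1  # push the score toward zero and beyond
    return [xi + direction * eps * (1 if wi > 0 else -1 if wi < 0 else 0)
            for xi, wi in zip(x, w)]
```

For a DNN the same sign-of-gradient step is applied to the input gradient of the loss; the perturbation stays within an L-infinity ball of radius `eps`, which is what makes it hard to perceive.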
no code implementations • 20 Oct 2020 • Zhen Xiang, David J. Miller, George Kesidis
Unfortunately, most existing REDs rely on an unrealistic assumption that all classes except the target class are source classes of the attack.
no code implementations • 15 Oct 2020 • Zhen Xiang, David J. Miller, George Kesidis
The attacker poisons the training set with a relatively small set of images from one (or several) source class(es), embedded with a backdoor pattern and labeled to a target class.
no code implementations • 18 Nov 2019 • Zhen Xiang, David J. Miller, George Kesidis
Here, we address post-training detection of innocuous perceptible backdoors in DNN image classifiers, wherein the defender does not have access to the poisoned training set, but only to the trained classifier, as well as unpoisoned examples.
no code implementations • 15 Oct 2019 • George Kesidis, David J. Miller, Zhen Xiang
We provide a new local class-purity theorem for Lipschitz continuous DNN classifiers.
no code implementations • 27 Aug 2019 • Zhen Xiang, David J. Miller, George Kesidis
Here we address post-training detection of backdoor attacks in DNN image classifiers, seldom considered in existing works, wherein the defender does not have access to the poisoned training set, but only to the trained classifier itself, as well as to clean examples from the classification domain.
1 code implementation • 18 May 2019 • George Kesidis, Nader Alfares, Xi Li, Bhuvan Urgaonkar, Mahmut Kandemir, Takis Konstantopoulos
We consider a content-caching system that is shared by a number of proxies.
no code implementations • 12 Apr 2019 • David J. Miller, Zhen Xiang, George Kesidis
After introducing relevant terminology and the goals and range of possible knowledge of both attackers and defenders, we survey recent work on test-time evasion (TTE), data poisoning (DP), and reverse engineering (RE) attacks, and particularly on defenses against these attacks.
no code implementations • 31 Oct 2018 • David J. Miller, Xinyi Hu, Zhen Xiang, George Kesidis
Such attacks are successful mainly because of the poor representation power of the naive Bayes (NB) model, with only a single (component) density to represent spam (plus a possible attack).
no code implementations • 18 Dec 2017 • David J. Miller, Yulia Wang, George Kesidis
Tested on MNIST and CIFAR-10 image databases under three prominent attack strategies, our approach outperforms previous detection methods, achieving strong ROC AUC detection accuracy on two attacks and better accuracy than recently reported for a variety of methods on the strongest (CW) attack.
no code implementations • 10 Jun 2015 • Zhicong Qiu, David J. Miller, George Kesidis
In this work, we develop a group anomaly detection (GAD) scheme to identify the subset of samples and subset of features that jointly specify an anomalous cluster.
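A toy sketch of the joint sample/feature flavor of the problem (not the paper's GAD scheme; the thresholds and the cell-wise z-score rule are assumptions for illustration): each cell is z-scored against its column, and the rows and columns that accumulate enough large-|z| cells jointly delimit the anomalous cluster.

```python
def group_anomaly(data, z_thresh=2.0, min_hits=2):
    """Return (anomalous sample indices, anomalous feature indices) for a
    numeric matrix `data` (list of equal-length rows)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    stds = []
    for j in range(d):
        var = sum((row[j] - means[j]) ** 2 for row in data) / n
        stds.append(var ** 0.5 or 1.0)  # guard constant columns with std=1
    # Mark cells that deviate strongly from their column.
    hot = [[abs((data[i][j] - means[j]) / stds[j]) > z_thresh
            for j in range(d)] for i in range(n)]
    samples = [i for i in range(n) if sum(hot[i]) >= min_hits]
    features = [j for j in range(d) if sum(hot[i][j] for i in range(n)) >= min_hits]
    return samples, features
```

The point of the joint formulation is that neither the sample subset nor the feature subset is anomalous on its own; it is their intersection that stands out.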