Search Results for author: George Kesidis

Found 25 papers, 5 papers with code

Universal Post-Training Reverse-Engineering Defense Against Backdoors in Deep Neural Networks

no code implementations • 3 Feb 2024 • Xi Li, Hang Wang, David J. Miller, George Kesidis

A variety of defenses have been proposed against backdoor attacks on deep neural network (DNN) classifiers.

GPU Cluster Scheduling for Network-Sensitive Deep Learning

no code implementations • 29 Jan 2024 • Aakash Sharma, Vivek M. Bhasi, Sonali Singh, George Kesidis, Mahmut T. Kandemir, Chita R. Das

We propose a novel GPU-cluster scheduler for distributed DL (DDL) workloads that enables proximity-based consolidation of GPU resources according to the DDL jobs' sensitivities to the anticipated communication-network delays.

Scheduling
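
A minimal sketch of the placement principle described for the GPU-cluster scheduler above, assuming each job carries a network-sensitivity score and each node group an intra-group latency estimate; the names and data structures here are illustrative, not the authors' scheduler.

```python
# Hypothetical toy placement routine, not the authors' scheduler: put the most
# network-delay-sensitive DDL jobs on the best-connected GPU node groups first.
def place_jobs(jobs, node_groups):
    """jobs: list of (job_id, num_gpus, net_sensitivity);
    node_groups: list of (group_id, free_gpus, intra_group_latency)."""
    placement = {}
    free = {gid: gpus for gid, gpus, _ in node_groups}
    # Most sensitive jobs first; lowest-latency (best-connected) groups first.
    for job_id, gpus, _ in sorted(jobs, key=lambda j: j[2], reverse=True):
        for gid, _, _ in sorted(node_groups, key=lambda g: g[2]):
            if free[gid] >= gpus:          # consolidate the job within one group
                placement[job_id] = gid
                free[gid] -= gpus
                break
    return placement
```

Consolidating a sensitive job inside one low-latency group keeps its gradient exchanges off the slower inter-group links.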

Post-Training Overfitting Mitigation in DNN Classifiers

no code implementations • 28 Sep 2023 • Hang Wang, David J. Miller, George Kesidis

Well-known (non-malicious) sources of overfitting in deep neural net (DNN) classifiers include: i) large class imbalances; ii) insufficient training-set diversity; and iii) over-training.

Data Poisoning

Temporal-Distributed Backdoor Attack Against Video Based Action Recognition

no code implementations • 21 Aug 2023 • Xi Li, Songhe Wang, Ruiquan Huang, Mahanth Gowda, George Kesidis

Although there are extensive studies on backdoor attacks against image data, the susceptibility of video-based systems under backdoor attacks remains largely unexplored.

Action Recognition • Backdoor Attack • +3

Backdoor Mitigation by Correcting the Distribution of Neural Activations

no code implementations • 18 Aug 2023 • Xi Li, Zhen Xiang, David J. Miller, George Kesidis

Backdoor (Trojan) attacks are an important type of adversarial exploit against deep neural networks (DNNs), wherein a test instance is (mis)classified to the attacker's target class whenever the attacker's backdoor trigger is present.

Improved Activation Clipping for Universal Backdoor Mitigation and Test-Time Detection

1 code implementation • 8 Aug 2023 • Hang Wang, Zhen Xiang, David J. Miller, George Kesidis

Deep neural networks are vulnerable to backdoor attacks (Trojans), where an attacker poisons the training set with backdoor triggers so that the neural network learns to classify test-time triggers to the attacker's designated target class.

Image Classification
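
The title above suggests bounding (clipping) internal activations as a backdoor mitigation. The following is a rough, assumed-form sketch of that general idea in PyTorch; the paper's actual procedure for choosing the bounds is not reproduced here.

```python
# Rough, assumed-form sketch of activation clipping; not the authors' method.
import torch
import torch.nn as nn

class ClippedReLU(nn.Module):
    """ReLU whose output is additionally bounded above, so trigger-induced
    activation spikes are suppressed."""
    def __init__(self, bound: float):
        super().__init__()
        self.bound = bound

    def forward(self, x):
        return torch.clamp(x, min=0.0, max=self.bound)

def clip_activations(model: nn.Module, bound: float) -> nn.Module:
    """Replace every ReLU in the model with a bounded version (a single global
    bound here for simplicity; per-layer bounds would be set on clean data)."""
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, ClippedReLU(bound))
        else:
            clip_activations(child, bound)      # recurse into submodules
    return model
```

In practice the bounds would be chosen on a small clean set so that clean accuracy is preserved while abnormally large, trigger-induced activations are suppressed.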

Analysis of Distributed Deep Learning in the Cloud

no code implementations • 30 Aug 2022 • Aakash Sharma, Vivek M. Bhasi, Sonali Singh, Rishabh Jain, Jashwant Raj Gunasekaran, Subrata Mitra, Mahmut Taylan Kandemir, George Kesidis, Chita R. Das

We aim to resolve this problem by introducing a comprehensive distributed deep learning (DDL) profiler, which can determine the various execution "stalls" that DDL suffers from while running on a public cloud.
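
As a back-of-the-envelope illustration of the "stall" decomposition mentioned above; the breakdown below is my own assumption, not the profiler's actual output schema.

```python
# Split one training iteration's wall-clock time into GPU compute and stalls
# (data-loading/IO waits, gradient-communication waits, and anything left over).
def decompose_iteration(iter_time_s, compute_time_s, io_wait_s, comm_wait_s):
    stall = max(0.0, iter_time_s - compute_time_s)
    return {
        "compute": compute_time_s,
        "io_stall": io_wait_s,
        "comm_stall": comm_wait_s,
        "other_stall": max(0.0, stall - io_wait_s - comm_wait_s),
    }
```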

MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic

1 code implementation • 13 May 2022 • Hang Wang, Zhen Xiang, David J. Miller, George Kesidis

Our detector leverages the influence of the backdoor attack, independent of the backdoor embedding mechanism, on the landscape of the classifier's outputs prior to the softmax layer.

Backdoor Attack • backdoor defense • +1
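
A hedged sketch of a maximum-margin style statistic on the pre-softmax outputs, in the spirit of the description above; this is illustrative and not the authors' MM-BD implementation (their estimation and detection-inference details differ).

```python
# Illustrative maximum-margin statistic over the classifier's logits.
import statistics
import torch

def max_margin_statistic(model, num_classes, input_shape, steps=200, lr=0.1):
    """For each putative target class c, estimate the largest achievable
    pre-softmax margin logit_c - max_{k != c} logit_k by gradient ascent
    over the (clamped) input domain."""
    stats = []
    for c in range(num_classes):
        x = torch.rand(1, *input_shape, requires_grad=True)   # random start
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            logits = model(torch.clamp(x, 0.0, 1.0))
            margin = logits[0, c] - torch.max(
                torch.cat([logits[0, :c], logits[0, c + 1:]]))
            opt.zero_grad()
            (-margin).backward()                               # ascend the margin
            opt.step()
        with torch.no_grad():
            logits = model(torch.clamp(x, 0.0, 1.0))
            stats.append((logits[0, c] - torch.max(
                torch.cat([logits[0, :c], logits[0, c + 1:]]))).item())
    return stats

def flag_suspect_classes(stats, z_thresh=3.5):
    """Flag classes whose maximum margin is an unusually large outlier relative
    to the other classes (simple robust z-score; a stand-in for the paper's
    more principled null-model based inference)."""
    med = statistics.median(stats)
    mad = statistics.median([abs(s - med) for s in stats]) + 1e-12
    return [i for i, s in enumerate(stats) if 0.6745 * (s - med) / mad > z_thresh]
```

The intuition is that a backdoored target class admits an abnormally large achievable margin, regardless of how the trigger was embedded.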

Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios

1 code implementation • ICLR 2022 • Zhen Xiang, David J. Miller, George Kesidis

We show that our ET statistic is effective using the same detection threshold, irrespective of the classification domain, the attack configuration, and the BP reverse-engineering algorithm that is used.

Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks

no code implementations • 6 Dec 2021 • Xi Li, Zhen Xiang, David J. Miller, George Kesidis

A successfully attacked DNN will predict the attacker-desired target class whenever a test sample from any source class is embedded with the backdoor pattern, while correctly classifying clean (attack-free) test samples.

Backdoor Attack • Image Classification

Robust and Active Learning for Deep Neural Network Regression

no code implementations • 28 Jul 2021 • Xi Li, George Kesidis, David J. Miller, Maxime Bergeron, Ryan Ferguson, Vladimir Lucic

We describe a gradient-based method to discover local error maximizers of a deep neural network (DNN) used for regression, assuming the availability of an "oracle" capable of providing real-valued supervision (a regression target) for samples.

Active Learning • regression
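
A minimal sketch of the gradient-based search for local error maximizers described above, assuming an oracle callable that returns a regression target for any query point (treated as locally constant when differentiating); this is a simplified illustration, not the authors' procedure.

```python
import torch

def find_local_error_maximizer(model, oracle, x0, steps=100, lr=0.01):
    """Start from x0 and gradient-ascend the squared regression error
    (model(x) - oracle(x))**2 with respect to the input x."""
    x = x0.clone().detach().requires_grad_(True)
    for _ in range(steps):
        y = oracle(x.detach())             # real-valued supervision, held fixed
        err = ((model(x) - y) ** 2).sum()  # squared error at the current point
        err.backward()
        with torch.no_grad():
            x += lr * x.grad               # move toward higher error
            x.grad.zero_()
    return x.detach()
```

Points found this way can then be labeled by the oracle and added to the training set, combining robustness-oriented search with active learning.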

A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers

no code implementations • 28 May 2021 • Xi Li, David J. Miller, Zhen Xiang, George Kesidis

Data Poisoning (DP) is an effective attack that causes trained classifiers to misclassify their inputs.

Data Poisoning

Anomaly Detection of Adversarial Examples using Class-conditional Generative Adversarial Networks

1 code implementation • 21 May 2021 • Hang Wang, David J. Miller, George Kesidis

Deep Neural Networks (DNNs) have been shown vulnerable to Test-Time Evasion attacks (TTEs, or adversarial examples), which, by making small changes to the input, alter the DNN's decision.

Anomaly Detection • Image Classification
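
One way to read the approach named in the title above is: score a test input by how well a pre-trained class-conditional generative model explains it under the classifier's predicted class. The helper `cond_log_likelihood` below is an assumed interface, and this sketch omits the paper's GAN-specific details.

```python
import torch

def anomaly_score(classifier, cond_log_likelihood, x):
    """cond_log_likelihood(x, c) is assumed to return an approximate
    log p(x | class=c) from a pre-trained class-conditional generative model."""
    with torch.no_grad():
        pred = classifier(x).argmax(dim=1)
    # Low likelihood under the predicted class => anomalous (possible TTE)
    return -cond_log_likelihood(x, pred)
```

A detection threshold on this score would be set on clean held-out data, e.g., to meet a target false-positive rate.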

L-RED: Efficient Post-Training Detection of Imperceptible Backdoor Attacks without Access to the Training Set

no code implementations • 20 Oct 2020 • Zhen Xiang, David J. Miller, George Kesidis

Unfortunately, most existing REDs rely on an unrealistic assumption that all classes except the target class are source classes of the attack.

Adversarial Attack

Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing

no code implementations • 15 Oct 2020 • Zhen Xiang, David J. Miller, George Kesidis

The attacker poisons the training set with a relatively small set of images from one (or several) source class(es), embedded with a backdoor pattern and labeled to a target class.

Adversarial Attack • Data Poisoning
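
A minimal, assumed-form illustration of the poisoning step described above (an imperceptible additive pattern on a few source-class images, relabeled to the target class); the parameter names and the 2/255 budget are illustrative.

```python
import numpy as np

def poison(images, labels, source, target, pattern, frac=0.02, eps=2/255):
    """images: float array in [0, 1], shape (N, H, W, C); pattern: H x W x C."""
    idx = np.where(labels == source)[0]
    chosen = idx[: max(1, int(frac * len(idx)))]
    poisoned = images.copy()
    new_labels = labels.copy()
    # Additive, clipped perturbation keeps the trigger imperceptible.
    poisoned[chosen] = np.clip(poisoned[chosen] + eps * pattern, 0.0, 1.0)
    new_labels[chosen] = target                 # mislabel to the target class
    return poisoned, new_labels
```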

Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic

no code implementations • 18 Nov 2019 • Zhen Xiang, David J. Miller, George Kesidis

Here, we address post-training detection of innocuous perceptible backdoors in DNN image classifiers, wherein the defender does not have access to the poisoned training set, but only to the trained classifier, as well as unpoisoned examples.

Data Poisoning

Detection of Backdoors in Trained Classifiers Without Access to the Training Set

no code implementations • 27 Aug 2019 • Zhen Xiang, David J. Miller, George Kesidis

Here we address post-training detection of backdoor attacks in DNN image classifiers, seldom considered in existing works, wherein the defender does not have access to the poisoned training set, but only to the trained classifier itself, as well as to clean examples from the classification domain.

Data Poisoning • Unsupervised Anomaly Detection

On a caching system with object sharing

1 code implementation • 18 May 2019 • George Kesidis, Nader Alfares, Xi Li, Bhuvan Urgaonkar, Mahmut Kandemir, Takis Konstantopoulos

We consider a content-caching system that is shared by a number of proxies.

Performance • Networking and Internet Architecture
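
A toy sketch of a cache shared by several proxies, under my own simplifying assumptions (a single LRU cache with per-proxy hit counters); the paper's actual model and analysis are not reproduced here.

```python
from collections import OrderedDict

class SharedLRUCache:
    """One LRU cache whose objects are shared across several front-end proxies,
    so an object fetched for one proxy can serve later requests from any proxy."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()   # object_id -> content
        self.hits = {}               # proxy_id -> hit count

    def request(self, proxy_id, object_id, fetch):
        if object_id in self.store:
            self.store.move_to_end(object_id)        # hit: refresh recency
            self.hits[proxy_id] = self.hits.get(proxy_id, 0) + 1
            return self.store[object_id]
        content = fetch(object_id)                   # miss: fetch from origin
        self.store[object_id] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)           # evict least recently used
        return content
```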

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks

no code implementations • 12 Apr 2019 • David J. Miller, Zhen Xiang, George Kesidis

After introducing relevant terminology and the goals and range of possible knowledge of both attackers and defenders, we survey recent work on test-time evasion (TTE), data poisoning (DP), and reverse engineering (RE) attacks, and particularly defenses against these attacks.

Anomaly Detection • Data Poisoning • +2

A Mixture Model Based Defense for Data Poisoning Attacks Against Naive Bayes Spam Filters

no code implementations • 31 Oct 2018 • David J. Miller, Xinyi Hu, Zhen Xiang, George Kesidis

Such attacks are successful mainly because of the poor representation power of the naive Bayes (NB) model, with only a single (component) density to represent spam (plus a possible attack).

Data Poisoning

When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time

no code implementations • 18 Dec 2017 • David J. Miller, Yulia Wang, George Kesidis

Tested on MNIST and CIFAR-10 image databases under three prominent attack strategies, our approach outperforms previous detection methods, achieving strong ROC AUC detection accuracy on two attacks and better accuracy than recently reported for a variety of methods on the strongest (CW) attack.

Anomaly Detection • General Classification

Detecting Clusters of Anomalies on Low-Dimensional Feature Subsets with Application to Network Traffic Flow Data

no code implementations • 10 Jun 2015 • Zhicong Qiu, David J. Miller, George Kesidis

In this work, we develop a group anomaly detection (GAD) scheme to identify the subset of samples and subset of features that jointly specify an anomalous cluster.

Group Anomaly Detection • Network Intrusion Detection
