Search Results for author: Carsten Rudolph

Found 11 papers, 0 papers with code

Guarding the Grid: Enhancing Resilience in Automated Residential Demand Response Against False Data Injection Attacks

no code implementations14 Dec 2023 Thusitha Dayaratne, Carsten Rudolph, Ariel Liebman, Mahsa Salehi

Utility companies are increasingly leveraging residential demand flexibility and the proliferation of smart/IoT devices to enhance the effectiveness of residential demand response (DR) programs through automated device scheduling.

Anomaly Detection · Decision Making +1

RAI4IoE: Responsible AI for Enabling the Internet of Energy

no code implementations20 Sep 2023 Minhui Xue, Surya Nepal, Ling Liu, Subbu Sethuvenkatraman, Xingliang Yuan, Carsten Rudolph, Ruoxi Sun, Greg Eisenhauer

This paper aims to develop an Equitable and Responsible AI framework with enabling techniques and algorithms for the Internet of Energy (IoE), in short, RAI4IoE.

Management

Email Summarization to Assist Users in Phishing Identification

no code implementations24 Mar 2022 Amir Kashapov, Tingmin Wu, Alsharif Abuadbba, Carsten Rudolph

Cyber-phishing attacks have recently become more precise, targeted, and tailored by training data to activate only in the presence of specific information or cues.

Robust Training Using Natural Transformation

no code implementations10 May 2021 Shuo Wang, Lingjuan Lyu, Surya Nepal, Carsten Rudolph, Marthie Grobler, Kristen Moore

We target attributes of the input images that are independent of the class identification, and manipulate those attributes to mimic real-world natural transformations (NaTra) of the inputs, which are then used to augment the training dataset of the image classifier.

Attribute · Data Augmentation +2
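
A minimal sketch of the class-preserving augmentation idea described in "Robust Training Using Natural Transformation" above. The manipulate-then-keep-label loop follows the abstract; the specific photometric and geometric operations below are hypothetical stand-ins for the paper's attribute manipulations, not the authors' NaTra implementation.

```python
# Sketch only: augment a dataset with transformations that alter
# class-independent attributes while keeping the original label.
# ColorJitter/RandomRotation are assumed stand-ins, not the paper's
# actual NaTra attribute manipulations.
from torchvision import transforms

natra_like = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3),  # lighting, not class identity
    transforms.RandomRotation(degrees=10),                  # pose, not class identity
])

def augment(dataset):
    """Yield each (image, label) pair plus a transformed copy with the same label."""
    for image, label in dataset:
        yield image, label
        yield natra_like(image), label  # label preserved: only non-class attributes change
```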

OCTOPUS: Overcoming Performance and Privatization Bottlenecks in Distributed Learning

no code implementations3 May 2021 Shuo Wang, Surya Nepal, Kristen Moore, Marthie Grobler, Carsten Rudolph, Alsharif Abuadbba

We introduce a new distributed/collaborative learning scheme to address communication overhead via latent compression, leveraging global data while providing privatization of local data without additional cost due to encryption or perturbation.

Disentanglement · Federated Learning
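
The sketch below illustrates the latent-compression idea from the OCTOPUS entry above: clients share compact latent codes instead of raw data, so the raw inputs never leave the client. The encoder architecture, code size, and classifier are illustrative assumptions, not the paper's design.

```python
# Sketch only: each client compresses its local images into small latent
# codes and contributes those to a central learner instead of raw pixels.
# The shapes below are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(            # runs on each client
    nn.Flatten(),
    nn.Linear(28 * 28, 32),         # 32-dim latent code per image
    nn.ReLU(),
)

classifier = nn.Linear(32, 10)      # trained centrally on the shared codes

def client_contribution(images):
    """Compress a local batch to latent codes; raw images stay on the client."""
    with torch.no_grad():
        return encoder(images)

# Central side: operate only on the received latent codes.
codes = client_contribution(torch.rand(8, 1, 28, 28))
logits = classifier(codes)
```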

Adversarial Defense by Latent Style Transformations

no code implementations17 Jun 2020 Shuo Wang, Surya Nepal, Alsharif Abuadbba, Carsten Rudolph, Marthie Grobler

The intuition behind our approach is that the essential characteristics of a normal image generally remain consistent under non-essential style transformations, e.g., slightly changing the facial expression of a human portrait.

Adversarial Defense
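
A hedged sketch of the consistency idea behind "Adversarial Defense by Latent Style Transformations" above: if a classifier's prediction is not stable under small, class-irrelevant style changes, the input is treated as suspicious. The brightness adjustment used here is a simple stand-in for the paper's latent style transformations.

```python
# Sketch only: flag an input as adversarial if the model's prediction is not
# stable under small, non-essential "style" changes. Brightness adjustment is
# an assumed proxy for the paper's latent style transformations.
import torch
from torchvision.transforms import functional as F

def is_suspicious(model, image, n_checks=4):
    """Return True if predictions disagree across styled variants of the image."""
    base = model(image.unsqueeze(0)).argmax(dim=1)
    for k in range(1, n_checks + 1):
        styled = F.adjust_brightness(image, 1.0 + 0.05 * k)  # non-essential change
        pred = model(styled.unsqueeze(0)).argmax(dim=1)
        if pred != base:
            return True  # prediction not consistent -> likely adversarial
    return False
```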

Defending Adversarial Attacks via Semantic Feature Manipulation

no code implementations3 Feb 2020 Shuo Wang, Tianle Chen, Surya Nepal, Carsten Rudolph, Marthie Grobler, Shangyu Chen

In this paper, we propose a one-off and attack-agnostic Feature Manipulation (FM)-Defense to detect and purify adversarial examples in an interpretable and efficient manner.

General Classification
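
A rough sketch of the detect-and-purify flow named in the FM-Defense entry above, assuming an autoencoder stands in for the paper's semantic feature space; the threshold and the reconstruction-based detection rule are illustrative assumptions, not the authors' method.

```python
# Sketch only: flag a possible adversarial example by how far it moves when
# projected through a feature (autoencoder) space, and "purify" it by
# returning the reconstruction. The autoencoder is an assumed stand-in for
# the paper's semantic feature manipulation.
import torch

def detect_and_purify(autoencoder, image, threshold=0.1):
    with torch.no_grad():
        purified = autoencoder(image.unsqueeze(0)).squeeze(0)
    distortion = torch.mean((purified - image) ** 2).item()
    is_adversarial = distortion > threshold   # large shift -> flag the input
    return is_adversarial, purified           # classify the purified image downstream
```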

OIAD: One-for-all Image Anomaly Detection with Disentanglement Learning

no code implementations18 Jan 2020 Shuo Wang, Tianle Chen, Shangyu Chen, Carsten Rudolph, Surya Nepal, Marthie Grobler

Our key insight is that the impact of small perturbation on the latent representation can be bounded for normal samples while anomaly images are usually outside such bounded intervals, referred to as structure consistency.

Anomaly Detection · Disentanglement
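
The structure-consistency check described in the OIAD entry above can be sketched as follows, assuming an encoder/decoder pair provides the latent representation; the noise scale and decision bound are illustrative assumptions.

```python
# Sketch only: a sample is treated as anomalous if a small perturbation of
# its latent code produces an unexpectedly large change in the output,
# i.e. it falls outside the bounded interval expected for normal samples.
import torch

def structure_consistency_score(encoder, decoder, image, noise_scale=0.05):
    with torch.no_grad():
        z = encoder(image.unsqueeze(0))
        recon = decoder(z)
        recon_perturbed = decoder(z + noise_scale * torch.randn_like(z))
    # How much the output moves under a small latent perturbation.
    return torch.mean((recon_perturbed - recon) ** 2).item()

def is_anomaly(encoder, decoder, image, bound=0.02):
    return structure_consistency_score(encoder, decoder, image) > bound  # outside bound -> anomaly
```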

Backdoor Attacks against Transfer Learning with Pre-trained Deep Learning Models

no code implementations10 Jan 2020 Shuo Wang, Surya Nepal, Carsten Rudolph, Marthie Grobler, Shangyu Chen, Tianle Chen

In this paper, we demonstrate a backdoor threat to transfer learning tasks on both image and time-series data leveraging the knowledge of publicly accessible Teacher models, aimed at defeating three commonly adopted defenses: pruning-based, retraining-based, and input pre-processing-based defenses.

Electrocardiography (ECG) · Electroencephalogram (EEG) +3
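
A minimal illustration of the kind of trigger-based poisoning such a backdoor threat relies on; the patch size, position, poisoning fraction, and target label below are assumptions, and the paper's actual attack is built specifically around publicly available Teacher models and the listed defenses.

```python
# Sketch only: stamp a small trigger patch onto a fraction of training images
# and relabel them to an attacker-chosen class, so a model fine-tuned on the
# poisoned set learns to associate the trigger with that class.
import torch

TARGET_CLASS = 0  # attacker-chosen label (assumption)

def stamp_trigger(image, patch_value=1.0, size=3):
    poisoned = image.clone()
    poisoned[..., -size:, -size:] = patch_value  # bright square in one corner
    return poisoned

def poison(dataset, fraction=0.05):
    """Yield the dataset with a small fraction of trigger-stamped, relabelled samples."""
    for i, (image, label) in enumerate(dataset):
        if i % int(1 / fraction) == 0:
            yield stamp_trigger(image), TARGET_CLASS
        else:
            yield image, label
```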

Defeating Misclassification Attacks Against Transfer Learning

no code implementations29 Aug 2019 Bang Wu, Shuo Wang, Xingliang Yuan, Cong Wang, Carsten Rudolph, Xiangwen Yang

To avoid a bloated ensemble size during inference, we propose a two-phase defence: inference from the Student model is first performed to narrow down the candidate differentiators to be assembled, and then only a small, fixed number of them are chosen to validate clean inputs or reject adversarial ones.

Network Pruning · Transfer Learning
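
The two-phase idea summarised in the entry above can be sketched roughly as follows; the per-class differentiator lookup and the unanimity voting rule are hypothetical placeholders, not the paper's actual construction.

```python
# Sketch only: phase 1 uses the Student model's own prediction to pick a
# small, fixed set of candidate differentiators; phase 2 lets only those
# differentiators vote on whether the input looks clean.
import torch

def two_phase_check(student, differentiators_by_class, image, k=3):
    # Phase 1: narrow down candidates using the Student's inference.
    pred = student(image.unsqueeze(0)).argmax(dim=1).item()
    candidates = differentiators_by_class[pred][:k]  # small, fixed number

    # Phase 2: the chosen differentiators validate or reject the input.
    votes = [d(image.unsqueeze(0)).argmax(dim=1).item() == pred for d in candidates]
    return all(votes)   # True -> treat as clean; False -> reject as adversarial
```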
