Search Results for author: Marthie Grobler

Found 8 papers, 1 paper with code

Robust Training Using Natural Transformation

no code implementations · 10 May 2021 · Shuo Wang, Lingjuan Lyu, Surya Nepal, Carsten Rudolph, Marthie Grobler, Kristen Moore

We target attributes of the input images that are independent of the class identification, and manipulate those attributes to mimic real-world natural transformations (NaTra) of the inputs, which are then used to augment the training dataset of the image classifier.

Attribute · Data Augmentation +2
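The augmentation idea can be sketched in a few lines of NumPy, using simple hand-written edits as stand-ins for the learned natural transformations (the function name `natra_augment` and the example edits are illustrative, not from the paper):

```python
import numpy as np

def natra_augment(images, labels, attribute_edits, n_aug=2, seed=0):
    """Augment a dataset with class-independent attribute edits:
    each edit changes appearance but not class identity, so the
    original labels are reused for the augmented copies."""
    rng = np.random.default_rng(seed)
    aug_x, aug_y = [images], [labels]
    for _ in range(n_aug):
        edit = attribute_edits[rng.integers(len(attribute_edits))]
        aug_x.append(np.stack([edit(im) for im in images]))
        aug_y.append(labels)  # class identity unchanged
    return np.concatenate(aug_x), np.concatenate(aug_y)

# simple stand-ins for the learned natural transformations
edits = [
    lambda im: np.clip(im * 1.1, 0.0, 1.0),  # brighten
    lambda im: np.fliplr(im),                # mirror
]

x = np.random.rand(4, 8, 8)
y = np.array([0, 1, 0, 1])
x_aug, y_aug = natra_augment(x, y, edits, n_aug=2)
print(x_aug.shape, y_aug.shape)  # (12, 8, 8) (12,)
```

The key property is that every augmented copy keeps its original label, since the edits touch only attributes independent of class identity.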

OCTOPUS: Overcoming Performance and Privatization Bottlenecks in Distributed Learning

no code implementations · 3 May 2021 · Shuo Wang, Surya Nepal, Kristen Moore, Marthie Grobler, Carsten Rudolph, Alsharif Abuadbba

We introduce a new distributed/collaborative learning scheme to address communication overhead via latent compression, leveraging global data while providing privatization of local data without additional cost due to encryption or perturbation.

Disentanglement · Federated Learning
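A toy sketch of the latent-compression idea in NumPy, with a fixed random projection standing in for the learned encoder (the names and dimensions below are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def make_encoder(in_dim, latent_dim, seed=0):
    """Stand-in for a learned encoder: a fixed projection that
    compresses raw features into a small latent code."""
    w = np.random.default_rng(seed).standard_normal((in_dim, latent_dim))
    return lambda x: x @ (w / np.sqrt(in_dim))

rng = np.random.default_rng(0)
encoder = make_encoder(in_dim=64, latent_dim=8)

# each client encodes locally; only compact latents leave the device,
# so raw local data is never communicated
client_a = rng.standard_normal((100, 64))
client_b = rng.standard_normal((120, 64))
shared_latents = np.concatenate([encoder(client_a), encoder(client_b)])
print(shared_latents.shape)  # (220, 8)
```

Communicating 8-dimensional latents instead of 64-dimensional raw features cuts the payload by 8x here, which is the kind of communication saving the scheme targets.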

Adversarial Defense by Latent Style Transformations

no code implementations · 17 Jun 2020 · Shuo Wang, Surya Nepal, Alsharif Abuadbba, Carsten Rudolph, Marthie Grobler

The intuition behind our approach is that the essential characteristics of a normal image are generally preserved under non-essential style transformations, e.g., slightly changing the facial expression of a human portrait.

Adversarial Defense
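That intuition suggests a simple consistency check: a normal input keeps its prediction under non-essential style transformations, while an adversarial one tends to flip. A minimal sketch (the toy model and transforms are illustrative stand-ins, not the paper's latent-space pipeline):

```python
import numpy as np

def predict(model, x):
    return int(np.argmax(model(x)))

def is_suspicious(model, x, style_transforms):
    """Flag an input whose prediction flips under non-essential
    style transformations; normal inputs should stay consistent."""
    base = predict(model, x)
    return any(predict(model, t(x)) != base for t in style_transforms)

# toy "classifier" and mild style edits
model = lambda x: np.array([x.mean(), 1.0 - x.mean()])
transforms = [lambda x: np.clip(x * 1.05, 0, 1),
              lambda x: np.clip(x + 0.02, 0, 1)]

clean = np.full((8, 8), 0.9)         # confidently class 0
borderline = np.full((8, 8), 0.495)  # flips under tiny edits
print(is_suspicious(model, clean, transforms))       # False
print(is_suspicious(model, borderline, transforms))  # True
```

The paper's defense operates on learned latent style transformations rather than pixel edits, but the consistency test is the same shape.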

Defending Adversarial Attacks via Semantic Feature Manipulation

no code implementations · 3 Feb 2020 · Shuo Wang, Tianle Chen, Surya Nepal, Carsten Rudolph, Marthie Grobler, Shangyu Chen

In this paper, we propose a one-off and attack-agnostic Feature Manipulation (FM)-Defense to detect and purify adversarial examples in an interpretable and efficient manner.

General Classification
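The detect-then-purify flow can be caricatured in a few lines: learn per-feature intervals from normal data, flag inputs whose features fall outside, and purify by pulling them back inside. The bound-clipping below is only an illustrative stand-in for the paper's semantic feature manipulation:

```python
import numpy as np

def fit_bounds(features, k=3.0):
    """Per-feature intervals estimated from normal data."""
    mu, sd = features.mean(axis=0), features.std(axis=0)
    return mu - k * sd, mu + k * sd

def detect(f, lo, hi):
    return bool((f < lo).any() or (f > hi).any())

def purify(f, lo, hi):
    return np.clip(f, lo, hi)  # pull features back inside the bounds

rng = np.random.default_rng(0)
normal = rng.standard_normal((500, 16))
lo, hi = fit_bounds(normal)

adversarial = np.zeros(16)
adversarial[3] = 10.0  # one manipulated feature
print(detect(adversarial, lo, hi))                  # True
print(detect(purify(adversarial, lo, hi), lo, hi))  # False
```

Because the bounds are fit once from normal data, the check is one-off and agnostic to which attack produced the outlying features.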

OIAD: One-for-all Image Anomaly Detection with Disentanglement Learning

no code implementations · 18 Jan 2020 · Shuo Wang, Tianle Chen, Shangyu Chen, Carsten Rudolph, Surya Nepal, Marthie Grobler

Our key insight is that the impact of small perturbation on the latent representation can be bounded for normal samples while anomaly images are usually outside such bounded intervals, referred to as structure consistency.

Anomaly Detection · Disentanglement
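That bounded-shift test can be sketched as a score: perturb the input slightly, measure how far the latent representation moves, and compare against a bound. The toy nonlinear encoder below is a placeholder for the paper's disentangled encoder:

```python
import numpy as np

def structure_consistency_score(encode, x, eps=0.01, n=16, seed=0):
    """Mean latent shift under small input perturbations; normal
    samples should stay inside a tight bound, anomalies outside it."""
    rng = np.random.default_rng(seed)
    z = encode(x)
    shifts = [np.linalg.norm(encode(x + eps * rng.standard_normal(x.shape)) - z)
              for _ in range(n)]
    return float(np.mean(shifts))

# toy encoder whose sensitivity grows with input magnitude
w = np.random.default_rng(1).standard_normal((16, 4))
encode = lambda x: (x ** 2) @ w

normal = np.random.default_rng(2).uniform(-1, 1, 16)
anomaly = 5.0 * normal  # far outside the normal range
print(structure_consistency_score(encode, normal) <
      structure_consistency_score(encode, anomaly))  # True
```

A threshold on this score, calibrated on normal samples, then gives the one-for-all detector.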

Backdoor Attacks against Transfer Learning with Pre-trained Deep Learning Models

no code implementations · 10 Jan 2020 · Shuo Wang, Surya Nepal, Carsten Rudolph, Marthie Grobler, Shangyu Chen, Tianle Chen

In this paper, we demonstrate a backdoor threat to transfer learning tasks on both image and time-series data leveraging the knowledge of publicly accessible Teacher models, aimed at defeating three commonly adopted defenses: pruning-based, retraining-based, and input pre-processing-based defenses.

Electrocardiography (ECG) · Electroencephalogram (EEG) +3
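The data-poisoning step behind such a backdoor can be sketched in a few lines: stamp a small trigger patch on a fraction of the training set and relabel those samples to the attacker's target class (the patch shape, poison rate, and function names are illustrative assumptions, not the paper's exact attack):

```python
import numpy as np

def stamp_trigger(img, value=1.0, size=2):
    out = img.copy()
    out[-size:, -size:] = value  # small corner patch as the trigger
    return out

def poison(images, labels, target_class, rate=0.1, seed=0):
    """Return a poisoned copy of the dataset plus the poisoned indices."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), int(rate * len(images)), replace=False)
    x, y = images.copy(), labels.copy()
    for i in idx:
        x[i] = stamp_trigger(x[i])
        y[i] = target_class
    return x, y, idx

x = np.random.rand(100, 8, 8)
y = np.random.default_rng(0).integers(0, 5, 100)
x_p, y_p, idx = poison(x, y, target_class=0)
print(len(idx), bool((y_p[idx] == 0).all()))  # 10 True
```

A model fine-tuned on the poisoned set learns to associate the trigger patch with the target class while behaving normally on clean inputs, which is what lets the backdoor slip past pruning, retraining, and input pre-processing defenses.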
