Search Results for author: Shang-Tse Chen

Found 8 papers, 4 papers with code

UnMask: Adversarial Detection and Defense Through Robust Feature Alignment

1 code implementation • 21 Feb 2020 • Scott Freitas, Shang-Tse Chen, Zijie J. Wang, Duen Horng Chau

UnMask detects such attacks and defends the model by rectifying the misclassification, re-classifying the image based on its robust features.

Medical Diagnosis, Self-Driving Cars

Talk Proposal: Towards the Realistic Evaluation of Evasion Attacks using CARLA

3 code implementations • 18 Apr 2019 • Cory Cornelius, Shang-Tse Chen, Jason Martin, Duen Horng Chau

In this talk we describe our content-preserving attack on object detectors, ShapeShifter, and demonstrate how to evaluate this threat in realistic scenarios.

The Efficacy of SHIELD under Different Threat Models

no code implementations • 1 Feb 2019 • Cory Cornelius, Nilaksh Das, Shang-Tse Chen, Li Chen, Michael E. Kounavis, Duen Horng Chau

To evaluate the robustness of the defense against an adaptive attacker, we consider the targeted-attack success rate of the Projected Gradient Descent (PGD) attack, a strong gradient-based adversarial attack.

Adversarial Attack, Image Classification
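The PGD attack mentioned above iteratively nudges the input along the sign of the loss gradient, projecting back into a small perturbation ball after each step. A minimal sketch follows; it uses a toy linear scoring function with a hand-derived gradient in place of a neural network, and the function names (`pgd_targeted`, `model_score`) are illustrative, not from the paper's code.

```python
# Toy sketch of a targeted PGD attack. The "model" is a simple linear
# score w.x whose gradient w.r.t. the input is just w; a real attack
# would backpropagate through a trained DNN instead.

def model_score(x, target_w):
    # Toy "logit" for the target class: the dot product w.x.
    return sum(wi * xi for wi, xi in zip(target_w, x))

def grad_score(x, target_w):
    # Gradient of the linear score with respect to the input.
    return list(target_w)

def pgd_targeted(x0, target_w, eps=0.3, alpha=0.05, steps=20):
    """Maximize the target-class score while staying inside an L-inf
    ball of radius eps around x0 and the valid pixel range [0, 1]."""
    x = list(x0)
    for _ in range(steps):
        g = grad_score(x, target_w)
        # Ascend along the sign of the gradient (targeted attack).
        x = [xi + alpha * (1 if gi >= 0 else -1) for xi, gi in zip(x, g)]
        # Project back into the eps-ball, then into the valid range.
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
        x = [min(max(xi, 0.0), 1.0) for xi in x]
    return x

x0 = [0.5, 0.2, 0.8]
w = [1.0, -1.0, 1.0]
adv = pgd_targeted(x0, w)  # every coordinate stays within eps of x0
```

An adaptive attacker in the paper's sense would run exactly this loop, but with the gradient taken through the defense (e.g. the compression step) rather than the undefended model.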

ADAGIO: Interactive Experimentation with Adversarial Attack and Defense for Audio

no code implementations • 30 May 2018 • Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Li Chen, Michael E. Kounavis, Duen Horng Chau

Adversarial machine learning research has recently demonstrated the feasibility to confuse automatic speech recognition (ASR) models by introducing acoustically imperceptible perturbations to audio samples.

Adversarial Attack, Speech Recognition

ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector

3 code implementations • 16 Apr 2018 • Shang-Tse Chen, Cory Cornelius, Jason Martin, Duen Horng Chau

Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work.

Adversarial Attack, Autonomous Vehicles, +3

Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression

2 code implementations • 19 Feb 2018 • Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Siwei Li, Li Chen, Michael E. Kounavis, Duen Horng Chau

The rapidly growing body of research in adversarial machine learning has demonstrated that deep neural networks (DNNs) are highly vulnerable to adversarially generated images.
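Shield's core idea is to preprocess every input with lossy JPEG compression so that small adversarial perturbations are destroyed before the image reaches the classifier, and to "vaccinate" the model by training it on compressed images. The stdlib-only sketch below approximates JPEG's lossy step with coarse pixel-value quantization (actual JPEG encoding would need an image library); all names here are hypothetical, not from the paper's released code.

```python
# Sketch of a compression-based preprocessing defense in the spirit of
# Shield. Real Shield applies JPEG at varying qualities; this toy uses
# uniform pixel quantization to mimic the information loss.

def quantize(pixels, levels=8):
    """Coarsely quantize pixel values in [0, 255] into `levels` bins,
    mimicking the loss introduced by low-quality JPEG encoding."""
    step = 256 / levels
    return [min(int(p // step) * step + step / 2, 255) for p in pixels]

def vaccinated_predict(pixels, classify, levels=8):
    """'Vaccination': the classifier only ever sees compressed inputs,
    both during training and at inference time."""
    return classify(quantize(pixels, levels))

# A small adversarial-style perturbation is erased by the quantizer:
clean = [100, 100, 100]
perturbed = [103, 97, 101]  # imperceptible few-pixel noise
assert quantize(clean) == quantize(perturbed)
```

The defense is fast because the preprocessing is a fixed, cheap transform; its weakness, as the "Efficacy of SHIELD" entry above examines, is an adaptive attacker who optimizes through the compression step.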

Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression

no code implementations • 8 May 2017 • Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li Chen, Michael E. Kounavis, Duen Horng Chau

Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition.

Communication Efficient Distributed Agnostic Boosting

no code implementations • 21 Jun 2015 • Shang-Tse Chen, Maria-Florina Balcan, Duen Horng Chau

We consider the problem of learning from distributed data in the agnostic setting, i.e., in the presence of arbitrary forms of noise.
