Search Results for author: Michael E. Kounavis

Found 4 papers, 1 paper with code

Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression

no code implementations • 8 May 2017 • Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li Chen, Michael E. Kounavis, Duen Horng Chau

Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition.
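The defense this paper proposes, removing adversarial perturbations by round-tripping inputs through JPEG compression before classification, can be illustrated with a minimal sketch. This is not the authors' implementation: the Pillow-based round-trip and the quality setting are assumptions.

```python
import io

import numpy as np
from PIL import Image

def jpeg_defense(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Round-trip an image (H x W x 3, uint8) through JPEG.

    The lossy compression step tends to destroy high-frequency
    adversarial perturbations while keeping the semantic content.
    The default quality of 75 is an illustrative choice, not the
    paper's setting.
    """
    buf = io.BytesIO()
    Image.fromarray(image.astype(np.uint8)).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

# Usage: compress every input before it reaches the classifier, e.g.
#   prediction = model.predict(jpeg_defense(x))
```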

Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression

3 code implementations • 19 Feb 2018 • Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Siwei Li, Li Chen, Michael E. Kounavis, Duen Horng Chau

The rapidly growing body of research in adversarial machine learning has demonstrated that deep neural networks (DNNs) are highly vulnerable to adversarially generated images.
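A distinguishing idea in SHIELD is to make the compression unpredictable to an adaptive attacker. The sketch below randomizes the JPEG quality per input; it is a simplification under assumed quality levels, whereas the paper also varies quality locally within an image and "vaccinates" models by retraining on compressed images.

```python
import io
import random

import numpy as np
from PIL import Image

# An assumed set of quality levels; the paper draws from a fixed set
# of JPEG qualities to make the transform hard to anticipate.
QUALITIES = (20, 40, 60, 80)

def randomized_jpeg_defense(image: np.ndarray) -> np.ndarray:
    """JPEG round-trip at a randomly drawn quality level.

    Randomization denies an adaptive attacker a fixed, differentiable
    preprocessing step to optimize against.
    """
    buf = io.BytesIO()
    Image.fromarray(image.astype(np.uint8)).save(
        buf, format="JPEG", quality=random.choice(QUALITIES)
    )
    buf.seek(0)
    return np.array(Image.open(buf))
```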

ADAGIO: Interactive Experimentation with Adversarial Attack and Defense for Audio

no code implementations • 30 May 2018 • Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Li Chen, Michael E. Kounavis, Duen Horng Chau

Adversarial machine learning research has recently demonstrated that automatic speech recognition (ASR) models can be confused by introducing acoustically imperceptible perturbations into audio samples.

Adversarial Attack • Automatic Speech Recognition • +2
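The "acoustically imperceptible perturbations" mentioned above are small additive changes to the raw waveform. A minimal sketch under an assumed L-infinity budget (a real attack optimizes the perturbation against the ASR model's loss rather than taking it as given):

```python
import numpy as np

def perturb_audio(waveform: np.ndarray, delta: np.ndarray, eps: float = 0.002) -> np.ndarray:
    """Add a perturbation clipped to an L-infinity ball of radius eps.

    `eps` is an illustrative bound on a [-1, 1]-scaled waveform; actual
    ASR attacks optimize `delta` (e.g. by gradient descent on the
    model's transcription loss) to force a chosen output.
    """
    delta = np.clip(delta, -eps, eps)
    return np.clip(waveform + delta, -1.0, 1.0)
```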

The Efficacy of SHIELD under Different Threat Models

no code implementations • 1 Feb 2019 • Cory Cornelius, Nilaksh Das, Shang-Tse Chen, Li Chen, Michael E. Kounavis, Duen Horng Chau

To evaluate the robustness of the defense against an adaptive attacker, we consider the targeted-attack success rate of Projected Gradient Descent (PGD), a strong gradient-based attack from the adversarial machine learning literature.

Adversarial Attack • Image Classification
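Targeted PGD, the attack the abstract uses for evaluation, iteratively steps along the sign of the loss gradient toward an attacker-chosen label and projects the result back into an epsilon-ball around the original input. A minimal PyTorch sketch with illustrative hyperparameters (epsilon, step size, and iteration count are assumptions, not the paper's settings):

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target, eps=8 / 255, alpha=2 / 255, steps=40):
    """Targeted Projected Gradient Descent under an L-infinity budget.

    Each step moves *down* the cross-entropy loss toward `target`
    (the attacker-chosen label), then projects back into the eps-ball
    around the clean input x. Hyperparameters are illustrative.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()   # descend toward target
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project into eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                  # stay in valid pixel range
    return x_adv
```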
