Search Results for author: Moustafa Alzantot

Found 10 papers, 5 papers with code

Generating Natural Language Adversarial Examples

5 code implementations EMNLP 2018 Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, Kai-Wei Chang

Deep neural networks (DNNs) are vulnerable to adversarial examples: perturbations to correctly classified examples that can cause the model to misclassify.
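As a rough illustration of the attack family this paper belongs to, the sketch below runs a population-based word-substitution search against a black-box classifier. The classifier, synonym table, and hyper-parameters are toy placeholders, not the authors' released implementation.

```python
# Minimal sketch of a population-based word-substitution attack in the spirit of
# Alzantot et al. (EMNLP 2018). The victim classifier and synonym table are toy
# placeholders, not the paper's actual models or embedding-based neighbours.
import random

SYNONYMS = {                      # hypothetical nearest-neighbour substitutions
    "good": ["fine", "decent", "great"],
    "movie": ["film", "picture"],
    "boring": ["dull", "tedious"],
}

def classify(tokens):
    """Placeholder black-box classifier: returns P(target label | sentence)."""
    # Pretend the model keys on the word "boring"; replace with a real model's API.
    return 0.2 if "boring" in tokens else 0.9

def mutate(tokens):
    """Replace one randomly chosen word with a synonym, if any exist."""
    out = list(tokens)
    candidates = [i for i, w in enumerate(out) if w in SYNONYMS]
    if candidates:
        i = random.choice(candidates)
        out[i] = random.choice(SYNONYMS[out[i]])
    return out

def crossover(a, b):
    """Uniformly mix two candidate sentences word by word."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def attack(sentence, pop_size=20, generations=30):
    population = [mutate(sentence) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness = probability of the (wrong) target class under the black-box model.
        scores = [classify(ind) for ind in population]
        best = population[scores.index(max(scores))]
        if max(scores) > 0.5:          # model now prefers the target label
            return best
        # Sample parents proportionally to fitness, then crossover + mutate.
        parents = random.choices(population, weights=scores, k=2 * pop_size)
        population = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
                      for i in range(pop_size)]
    return best

print(attack("a good movie but boring".split()))
```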

Natural Language Inference · Sentiment Analysis

GenAttack: Practical Black-box Attacks with Gradient-Free Optimization

3 code implementations 28 May 2018 Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, Mani Srivastava

Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches.
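The following is a minimal sketch of a gradient-free, query-only attack in the spirit of GenAttack: a population of bounded perturbations is evolved using only the model's output probabilities. The stand-in model, population size, and mutation rate are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of a gradient-free, query-based attack in the spirit of GenAttack.
# The "model" below is a stand-in black-box classifier that only exposes output
# probabilities; epsilon, population size, and mutation rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def model_probs(x):
    """Placeholder black-box model: returns a probability vector over 10 classes."""
    logits = np.array([x.mean() * (i + 1) for i in range(10)])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def genattack(x_orig, target, eps=0.05, pop=8, gens=200, mut_rate=0.05):
    # Population of perturbations, each bounded by the L_inf budget eps.
    deltas = rng.uniform(-eps, eps, size=(pop,) + x_orig.shape)
    for _ in range(gens):
        fitness = np.array([model_probs(np.clip(x_orig + d, 0, 1))[target]
                            for d in deltas])
        best = deltas[fitness.argmax()]
        if model_probs(np.clip(x_orig + best, 0, 1)).argmax() == target:
            return np.clip(x_orig + best, 0, 1)            # success
        # Fitness-proportional selection of parent pairs.
        probs = fitness / fitness.sum()
        idx = rng.choice(pop, size=(pop, 2), p=probs)
        children = []
        for a, b in idx:
            mask = rng.random(x_orig.shape) < 0.5          # uniform crossover
            child = np.where(mask, deltas[a], deltas[b])
            mutate = rng.random(x_orig.shape) < mut_rate   # sparse random mutation
            child = np.where(mutate, rng.uniform(-eps, eps, x_orig.shape), child)
            children.append(np.clip(child, -eps, eps))
        children[0] = best                                 # elitism
        deltas = np.array(children)
    return np.clip(x_orig + best, 0, 1)

adv = genattack(rng.random((28, 28)), target=3)
```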

Adversarial Attack · Adversarial Robustness +1

Deep Residual Neural Networks for Audio Spoofing Detection

1 code implementation 30 Jun 2019 Moustafa Alzantot, Ziqi Wang, Mani B. Srivastava

Additionally, replay attacks, in which the attacker uses a loudspeaker to replay previously recorded genuine human speech, are possible.
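The detector named in the title is built from residual blocks over spectrogram-like features; the sketch below shows a generic residual network of that kind in PyTorch. Channel counts, depth, and the two-class head are illustrative choices, not the paper's configuration.

```python
# Generic residual network over spectrogram-like inputs, sketched in PyTorch.
# Layer sizes are illustrative, not the paper's actual architecture.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Skip connection: the block learns a residual on top of the identity.
        return torch.relu(self.body(x) + x)

class SpoofDetector(nn.Module):
    def __init__(self, n_blocks=4):
        super().__init__()
        self.stem = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(32) for _ in range(n_blocks)])
        self.head = nn.Linear(32, 2)   # genuine vs. spoofed

    def forward(self, spec):           # spec: (batch, 1, freq, time)
        h = self.blocks(self.stem(spec))
        h = h.mean(dim=(2, 3))         # global average pooling
        return self.head(h)

logits = SpoofDetector()(torch.randn(4, 1, 128, 200))
```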

Speaker Verification · Speech Synthesis +1

Did you hear that? Adversarial Examples Against Automatic Speech Recognition

1 code implementation 2 Jan 2018 Moustafa Alzantot, Bharathan Balaji, Mani Srivastava

Speech is a common and effective means of communication between humans, and modern consumer devices such as smartphones and home hubs are equipped with accurate, deep-learning-based automatic speech recognition to enable natural interaction between humans and machines.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +3

CAAD 2018: Generating Transferable Adversarial Examples

1 code implementation 29 Sep 2018 Yash Sharma, Tien-Dung Le, Moustafa Alzantot

Our team participated in the CAAD 2018 competition and won 1st place in both attack sub-tracks (non-targeted and targeted adversarial attacks) and 3rd place in defense.

Adversarial Attack · Adversarial Defense +1

SenseGen: A Deep Learning Architecture for Synthetic Sensor Data Generation

no code implementations 31 Jan 2017 Moustafa Alzantot, Supriyo Chakraborty, Mani B. Srivastava

Second, we use another LSTM-based discriminator model to distinguish between the true and the synthesized data.
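The snippet describes an LSTM generator paired with a second LSTM discriminator. A minimal sketch of that pairing, with illustrative layer sizes rather than SenseGen's actual settings, might look like this:

```python
# Sketch of the two-part idea in the snippet: an LSTM sequence generator plus a
# separate LSTM discriminator that scores sequences as real vs. synthetic.
# Dimensions and wiring are illustrative, not SenseGen's actual configuration.
import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    """Autoregressively predicts the next sensor sample from the sequence so far."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, 1)
        h, _ = self.lstm(x)
        return self.out(h)             # next-step predictions, same shape as x

class LSTMDiscriminator(nn.Module):
    """Reads a whole sequence and outputs the probability that it is real data."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, 1)
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.out(h_n[-1]))   # (batch, 1) real-vs-fake score

real = torch.randn(8, 100, 1)                      # pretend accelerometer traces
fake = LSTMGenerator()(torch.randn(8, 100, 1))
scores = LSTMDiscriminator()(torch.cat([real, fake.detach()]))
```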

NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning

no code implementations 5 Aug 2019 Moustafa Alzantot, Amy Widdicombe, Simon Julier, Mani Srivastava

When applied to image classification models, NeuroMask identifies the image parts that are most important to classifier results by applying a mask that hides/reveals different parts of the image, before feeding it back into the model.
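A simplified sketch of the mask-learning step described above: a per-pixel mask is optimized so that the masked image keeps the model's predicted class while the mask stays small. The classifier, regularization weight, and optimizer settings below are placeholders, not the paper's choices.

```python
# Simplified sketch of mask learning: optimize a per-pixel mask so the masked image
# keeps the model's predicted class while the mask stays sparse. The classifier and
# hyper-parameters are stand-ins, not NeuroMask's actual configuration.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()          # stand-in classifier
image = torch.randn(1, 3, 224, 224)            # stand-in input image
target = model(image).argmax(dim=1)            # class whose evidence we explain

mask_logits = torch.zeros(1, 1, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([mask_logits], lr=0.05)

for step in range(100):
    mask = torch.sigmoid(mask_logits)          # values in (0, 1), broadcast over RGB
    masked = image * mask                      # hide/reveal parts of the image
    logits = model(masked)
    # Keep the original prediction while penalizing large (non-sparse) masks.
    loss = F.cross_entropy(logits, target) + 0.01 * mask.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

importance_map = torch.sigmoid(mask_logits).detach()   # higher = more important pixel
```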

General Classification · Image Classification

NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations

no code implementations 18 Nov 2019 Xijie Huang, Moustafa Alzantot, Mani Srivastava

NeuronInspect first identifies the existence of backdoor attack targets by generating the explanation heatmap of the output layer.
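A simplified sketch of those two steps, using a gradient-based heatmap and a crude median-distance outlier score as stand-ins for the paper's exact saliency method and statistics:

```python
# Simplified sketch of the two steps in the snippet: per-class explanation heatmaps
# from the output layer, then an outlier score over those heatmaps to flag a
# suspicious (potentially backdoored) target class. The model, saliency method, and
# outlier statistic are simplified stand-ins, not the paper's exact choices.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
images = torch.randn(16, 3, 224, 224)          # a small batch of clean inputs

def class_heatmap(model, x, cls):
    """Gradient of the class logit w.r.t. the input, averaged over the batch."""
    x = x.clone().requires_grad_(True)
    model(x)[:, cls].sum().backward()
    return x.grad.abs().mean(dim=(0, 1))       # (H, W) saliency map for this class

# Summarize each class's heatmap by a few crude features (mean, peak, spread).
features = []
for cls in range(10):                          # only the first 10 classes, for brevity
    hm = class_heatmap(model, images, cls)
    features.append(torch.stack([hm.mean(), hm.max(),
                                 (hm > hm.mean()).float().mean()]))
features = torch.stack(features)

# Flag the class whose heatmap statistics sit farthest from the median of all classes.
med = features.median(dim=0).values
scores = (features - med).abs().sum(dim=1)
print("most outlying class:", scores.argmax().item())
```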

Backdoor Attack · Outlier Detection +1

PhysioGAN: Training High Fidelity Generative Model for Physiological Sensor Readings

no code implementations 25 Apr 2022 Moustafa Alzantot, Luis Garcia, Mani Srivastava

Generative models such as the variational autoencoder (VAE) and the generative adversarial networks (GAN) have proven to be incredibly powerful for the generation of synthetic data that preserves statistical properties and utility of real-world datasets, especially in the context of image and natural language text.
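Since the snippet refers to VAEs and GANs as the underlying machinery, here is a minimal VAE over fixed-length sensor windows showing the encoder, reparameterization, and decoder structure. It is a generic sketch of that model family, not PhysioGAN's conditional architecture.

```python
# Minimal VAE over fixed-length sensor windows, illustrating the encoder /
# reparameterization / decoder structure the snippet alludes to. Sizes and the
# loss weighting are illustrative, not PhysioGAN's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorVAE(nn.Module):
    def __init__(self, window=128, latent=16):
        super().__init__()
        self.enc = nn.Linear(window, 64)
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, window))

    def forward(self, x):                        # x: (batch, window)
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term + KL divergence to the standard normal prior.
    rec = F.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

model = SensorVAE()
x = torch.randn(32, 128)                         # pretend physiological windows
recon, mu, logvar = model(x)
loss = vae_loss(x, recon, mu, logvar)
loss.backward()
```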

Activity Recognition · Classification +3
