Search Results for author: Atul Prakash

Found 23 papers, 3 papers with code

Constraining the Attack Space of Machine Learning Models with Distribution Clamping Preprocessing

no code implementations18 May 2022 Ryan Feng, Somesh Jha, Atul Prakash

Preprocessing and outlier detection techniques have both been applied to neural networks to increase robustness with varying degrees of success.

BIG-bench Machine Learning, object-detection, +2
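
The snippet above does not spell out how the clamping is performed; as a minimal, assumed sketch of distribution-clamping preprocessing, the code below fits per-channel bounds on training images and clamps test inputs to those bounds (the bound choice and names are illustrative, not taken from the paper).

```python
import numpy as np

def fit_channel_bounds(train_images):
    """Estimate per-channel lower/upper bounds from training data.

    train_images: array of shape (N, H, W, C) with pixel values.
    Returns (lo, hi), each of shape (C,).
    """
    lo = train_images.min(axis=(0, 1, 2))
    hi = train_images.max(axis=(0, 1, 2))
    return lo, hi

def clamp_to_distribution(x, lo, hi):
    """Clamp an input image to the per-channel bounds seen in training."""
    return np.clip(x, lo, hi)

# Example: bounds fitted on random stand-in "training" images, applied to a test input.
train = np.random.rand(100, 32, 32, 3)
lo, hi = fit_channel_bounds(train)
test = np.random.randn(32, 32, 3)  # possibly out-of-range input
clamped = clamp_to_distribution(test, lo, hi)
```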

Concept-based Explanations for Out-Of-Distribution Detectors

no code implementations4 Mar 2022 Jihye Choi, Jayaram Raghuram, Ryan Feng, Jiefeng Chen, Somesh Jha, Atul Prakash

Based on these metrics, we propose a framework for learning a set of concepts that satisfy the desired properties of detection completeness and concept separability and demonstrate the framework's effectiveness in providing concept-based explanations for diverse OOD techniques.

OOD Detection
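
The framework itself (detection completeness, concept separability) is not shown in this snippet; purely as an assumed illustration of the concept-based-explanation idea, the sketch below derives "concept" directions from intermediate features via SVD and scores an example against them. This is not the learning procedure proposed in the paper.

```python
import numpy as np

def learn_concept_directions(features, num_concepts=5):
    """Illustrative concept vectors: top principal directions of the features.

    features: (N, D) array of intermediate-layer activations.
    """
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:num_concepts]  # (num_concepts, D)

def concept_scores(feature_vec, concepts):
    """Project one example's features onto each concept direction."""
    return concepts @ feature_vec

features = np.random.randn(200, 64)   # stand-in for detector activations
concepts = learn_concept_directions(features)
scores = concept_scores(features[0], concepts)
print(scores)  # one score per concept, the building block of the explanation
```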

Towards Adversarially Robust Deepfake Detection: An Ensemble Approach

no code implementations11 Feb 2022 Ashish Hooda, Neal Mangaokar, Ryan Feng, Kassem Fawaz, Somesh Jha, Atul Prakash

Detecting deepfakes is an important problem, but recent work has shown that DNN-based deepfake detectors are brittle against adversarial deepfakes, in which an adversary adds imperceptible perturbations to a deepfake to evade detection.

DeepFake Detection, Face Swapping
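
The snippet above describes the threat model rather than the ensemble defense; as a minimal sketch of that threat, assuming a placeholder PyTorch detector, the code below applies a single FGSM step to a fake frame to push it toward the "real" decision.

```python
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # placeholder deepfake detector
loss_fn = nn.CrossEntropyLoss()

def fgsm_evasion(x, true_label, eps=2 / 255):
    """One FGSM step: nudge the deepfake so the detector drifts away from 'fake'."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(detector(x), torch.tensor([true_label]))
    loss.backward()
    # Ascend the loss on the true ('fake') label while keeping the change small.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

fake_frame = torch.rand(1, 3, 64, 64)
adv_frame = fgsm_evasion(fake_frame, true_label=1)
```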

Sneakoscope: Revisiting Unsupervised Out-of-Distribution Detection

no code implementations29 Sep 2021 Tianji Cong, Atul Prakash

The problem of detecting out-of-distribution (OOD) examples in neural networks has been widely studied in the literature, with state-of-the-art techniques being supervised in that they require fine-tuning on OOD data to achieve high-quality OOD detection.

OOD Detection, Out-of-Distribution Detection
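
For contrast with the supervised techniques mentioned above, here is a minimal sketch of a standard unsupervised OOD score, the maximum-softmax-probability baseline; whether this particular score is among those revisited in the paper is not stated in the snippet.

```python
import torch
import torch.nn.functional as F

def msp_ood_score(logits):
    """Maximum-softmax-probability score: lower values suggest out-of-distribution."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

logits = torch.randn(4, 10)   # stand-in classifier outputs
scores = msp_ood_score(logits)
is_ood = scores < 0.5         # threshold chosen for illustration only
```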

Essential Features: Content-Adaptive Pixel Discretization to Improve Model Robustness to Adaptive Adversarial Attacks

no code implementations3 Dec 2020 Ryan Feng, Wu-chi Feng, Atul Prakash

Preprocessing defenses such as pixel discretization are appealing to remove adversarial attacks due to their simplicity.
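
The content-adaptive part of the defense is not described in this snippet; the sketch below shows the basic, non-adaptive form of pixel discretization (snapping each pixel to the nearest codeword of a fixed codebook), which is the preprocessing idea the paper builds on. The codebook here is assumed for illustration.

```python
import numpy as np

def discretize_pixels(image, codebook):
    """Snap every pixel value to its nearest codeword (simple, non-adaptive version)."""
    image = np.asarray(image, dtype=np.float32)
    codebook = np.asarray(codebook, dtype=np.float32)
    # Distance from each pixel to each codeword, then pick the closest.
    idx = np.abs(image[..., None] - codebook).argmin(axis=-1)
    return codebook[idx]

image = np.random.rand(28, 28)
codebook = np.array([0.0, 0.5, 1.0])   # illustrative 3-level codebook
quantized = discretize_pixels(image, codebook)
```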

Understanding and Diagnosing Vulnerability under Adversarial Attacks

no code implementations17 Jul 2020 Haizhong Zheng, Ziqi Zhang, Honglak Lee, Atul Prakash

Moreover, we design the first diagnostic method to quantify the vulnerability contributed by each layer, which can be used to identify vulnerable parts of model architectures.

Classification, General Classification
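
The paper's layer-wise diagnostic metric is not given in the snippet; as a rough, assumed proxy for the idea, the sketch below measures how much each layer's activations shift between a clean input and a perturbed one.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))  # placeholder

def layer_activation_shift(model, x_clean, x_adv):
    """Relative activation change per layer, used here as a crude vulnerability proxy."""
    shifts = []
    h_clean, h_adv = x_clean, x_adv
    for layer in model:
        h_clean, h_adv = layer(h_clean), layer(h_adv)
        denom = h_clean.norm() + 1e-8
        shifts.append(((h_adv - h_clean).norm() / denom).item())
    return shifts

x = torch.rand(1, 784)
x_adv = (x + 0.05 * torch.randn_like(x)).clamp(0, 1)  # stand-in perturbation
print(layer_activation_shift(model, x, x_adv))
```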

Towards Robustness against Unsuspicious Adversarial Examples

no code implementations8 May 2020 Liang Tong, Minzhe Guo, Atul Prakash, Yevgeniy Vorobeychik

We then experimentally demonstrate that our attacks indeed do not significantly change perceptual salience of the background, but are highly effective against classifiers robust to conventional attacks.

GRAPHITE: Generating Automatic Physical Examples for Machine-Learning Attacks on Computer Vision Systems

1 code implementation17 Feb 2020 Ryan Feng, Neal Mangaokar, Jiefeng Chen, Earlence Fernandes, Somesh Jha, Atul Prakash

We address three key requirements for practical real-world attacks: 1) automatically constraining the size and shape of the attack so it can be applied with stickers, 2) transform-robustness, i.e., robustness of an attack to environmental physical variations such as viewpoint and lighting changes, and 3) supporting attacks not only in white-box but also in black-box hard-label scenarios, so that the adversary can attack proprietary models.

BIG-bench Machine Learning, General Classification, +1
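
As an assumed sketch of the second requirement listed above (transform-robustness), the code below overlays a masked "sticker" perturbation and estimates how often the attack survives random brightness and noise variations; the transform set and the placeholder classifier are stand-ins, not the GRAPHITE pipeline.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder classifier

def apply_sticker(image, perturbation, mask):
    """Overlay the perturbation only inside the mask (the 'sticker' region)."""
    return image * (1 - mask) + perturbation * mask

def transform_robustness(image, perturbation, mask, target, n_trials=100):
    """Fraction of random brightness/noise variations under which the attack
    still yields the target label -- a stand-in for viewpoint and lighting changes."""
    successes = 0
    for _ in range(n_trials):
        x = apply_sticker(image, perturbation, mask)
        x = (x * (0.8 + 0.4 * torch.rand(1)) + 0.02 * torch.randn_like(x)).clamp(0, 1)
        if model(x).argmax(dim=1).item() == target:
            successes += 1
    return successes / n_trials

image = torch.rand(1, 3, 32, 32)
mask = torch.zeros_like(image)
mask[..., 8:16, 8:16] = 1          # small square sticker area
perturbation = torch.rand_like(image)
print(transform_robustness(image, perturbation, mask, target=0))
```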

Efficient Adversarial Training with Transferable Adversarial Examples

1 code implementation CVPR 2020 Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, Atul Prakash

Adversarial training is an effective defense method to protect classification models against adversarial attacks.
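
The paper's efficiency gain comes from reusing transferable adversarial examples across epochs; the snippet only states the baseline, so here is a minimal sketch of that baseline, one step of standard (FGSM-based) adversarial training with a placeholder model, rather than the accelerated method itself.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def adversarial_training_step(x, y, eps=0.1):
    """One step of standard adversarial training: perturb, then train on the perturbed batch."""
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()  # FGSM perturbation

    opt.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
print(adversarial_training_step(x, y))
```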

Can Attention Masks Improve Adversarial Robustness?

no code implementations27 Nov 2019 Pratik Vaishnavi, Tianji Cong, Kevin Eykholt, Atul Prakash, Amir Rahmati

Focusing on the observation that discrete pixelization in MNIST makes the background completely black and foreground completely white, we hypothesize that the important property for increasing robustness is the elimination of image background using attention masks before classifying an object.

Adversarial Robustness
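
As a minimal sketch of the hypothesis stated above, the code below zeroes out an MNIST-style image's background with a binary mask before classification; a plain intensity threshold stands in here for the attention masks studied in the paper.

```python
import numpy as np

def background_mask(image, threshold=0.3):
    """Crude attention mask: keep bright (foreground) pixels, zero out the rest."""
    return (image > threshold).astype(image.dtype)

def mask_background(image, threshold=0.3):
    """Eliminate the background before the image is passed to a classifier."""
    return image * background_mask(image, threshold)

digit = np.random.rand(28, 28).astype(np.float32)   # stand-in for an MNIST digit
cleaned = mask_background(digit)                     # background pixels set to 0
```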

Towards Model-Agnostic Adversarial Defenses using Adversarially Trained Autoencoders

no code implementations12 Sep 2019 Pratik Vaishnavi, Kevin Eykholt, Atul Prakash, Amir Rahmati

Numerous techniques have been proposed to harden machine learning algorithms and mitigate the effect of adversarial attacks.

Adversarial Defense, Adversarial Robustness, +1
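
The general pattern implied by the title is a model-agnostic purification front end; the sketch below prepends a tiny placeholder autoencoder to an arbitrary classifier, with the understanding that in the paper's setup only the autoencoder would be adversarially trained. The architecture and sizes here are assumptions.

```python
import torch
import torch.nn as nn

class DenoisingFrontEnd(nn.Module):
    """Tiny autoencoder placed in front of an arbitrary classifier.

    In a model-agnostic defense, only this front end is (adversarially) trained
    against perturbations; the downstream classifier is left untouched.
    """
    def __init__(self, dim=784, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

autoencoder = DenoisingFrontEnd()
classifier = nn.Linear(784, 10)          # any pretrained classifier would go here

x = torch.rand(4, 784)
logits = classifier(autoencoder(x))      # purify first, then classify
```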

Analyzing the Interpretability Robustness of Self-Explaining Models

no code implementations27 May 2019 Haizhong Zheng, Earlence Fernandes, Atul Prakash

Recently, interpretable models called self-explaining models (SEMs) have been proposed with the goal of providing interpretability robustness.

Robust Classification using Robust Feature Augmentation

no code implementations26 May 2019 Kevin Eykholt, Swati Gupta, Atul Prakash, Amir Rahmati, Pratik Vaishnavi, Haizhong Zheng

Existing deep neural networks, say for image classification, have been shown to be vulnerable to adversarial images that can cause a DNN to misclassify, without any perceptible change to the image.

Binarization, Classification, +3

Designing Adversarially Resilient Classifiers using Resilient Feature Engineering

no code implementations17 Dec 2018 Kevin Eykholt, Atul Prakash

We provide a methodology, resilient feature engineering, for creating adversarially resilient classifiers.

Feature Engineering, General Classification

Physical Adversarial Examples for Object Detectors

no code implementations20 Jul 2018 Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song

In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene.

object-detection, Object Detection

Robust Physical-World Attacks on Deep Learning Visual Classification

no code implementations CVPR 2018 Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input.

Classification, General Classification

Tyche: Risk-Based Permissions for Smart Home Platforms

no code implementations14 Jan 2018 Amir Rahmati, Earlence Fernandes, Kevin Eykholt, Atul Prakash

When using risk-based permissions, device operations are grouped into units of similar risk, and users grant apps access to devices at that risk-based granularity.

Cryptography and Security
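
As an assumed sketch of the risk-based permission model described above, the code below groups device operations into risk units and checks whether an app was granted a device at the matching risk granularity; the operations and risk assignments are invented for illustration and are not Tyche's actual groupings.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1
    HIGH = 2

# Illustrative grouping of device operations into units of similar risk.
OPERATION_RISK = {
    ("lock", "read_state"): Risk.LOW,
    ("lock", "unlock"): Risk.HIGH,
    ("camera", "view_stream"): Risk.HIGH,
}

def is_allowed(granted, device, operation):
    """An app may perform an operation only if the user granted it that
    device at the operation's risk level (the risk-based granularity)."""
    return (device, OPERATION_RISK[(device, operation)]) in granted

granted = {("lock", Risk.LOW), ("camera", Risk.HIGH)}
print(is_allowed(granted, "lock", "read_state"))   # True
print(is_allowed(granted, "lock", "unlock"))       # False: high-risk lock access not granted
```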

Note on Attacking Object Detectors with Adversarial Stickers

no code implementations21 Dec 2017 Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Dawn Song, Tadayoshi Kohno, Amir Rahmati, Atul Prakash, Florian Tramer

Given that state-of-the-art object detection algorithms are harder to fool with the same set of adversarial examples, we show here that these detectors can also be attacked by physical adversarial examples.

Robust Physical-World Attacks on Deep Learning Models

1 code implementation27 Jul 2017 Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions.
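
The snippet names RP2 but not its details; as a simplified, assumed sketch of the core idea, the code below optimizes a single masked perturbation over a batch of images of the same object under varied (simulated) physical conditions, so that one perturbation works across conditions. This is not the published RP2 procedure.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder classifier
loss_fn = nn.CrossEntropyLoss()

def optimize_robust_perturbation(images, mask, target, steps=100, lr=0.05):
    """Optimize one masked perturbation over many images of the same object
    taken under different conditions, plus a small sparsity penalty."""
    delta = torch.zeros_like(images[0:1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target_labels = torch.full((images.shape[0],), target, dtype=torch.long)
    for _ in range(steps):
        x = (images + mask * delta).clamp(0, 1)
        loss = loss_fn(model(x), target_labels) + 0.01 * delta.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (mask * delta).detach()

# Stand-in for photos of one object under varied viewpoints and lighting.
images = torch.rand(8, 3, 32, 32)
mask = torch.zeros(1, 3, 32, 32)
mask[..., 10:20, 10:20] = 1
perturbation = optimize_robust_perturbation(images, mask, target=3)
```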

Internet of Things Security Research: A Rehash of Old Ideas or New Intellectual Challenges?

no code implementations23 May 2017 Earlence Fernandes, Amir Rahmati, Kevin Eykholt, Atul Prakash

The Internet of Things (IoT) is a new computing paradigm that spans wearable devices, homes, hospitals, cities, transportation, and critical infrastructure.

Cryptography and Security
