Search Results for author: Prateek Mittal

Found 40 papers, 24 papers with code

Formulating Robustness Against Unforeseen Attacks

no code implementations • 28 Apr 2022 • Sihui Dai, Saeed Mahloujifar, Prateek Mittal

Existing defenses against adversarial examples such as adversarial training typically assume that the adversary will conform to a specific or known threat model, such as $\ell_p$ perturbations within a fixed budget.

ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking

1 code implementation • 3 Feb 2022 • Chong Xiang, Alexander Valtchanov, Saeed Mahloujifar, Prateek Mittal

The core operation of ObjectSeeker is patch-agnostic masking: we aim to mask out the entire adversarial patch without any prior knowledge of the shape, size, and location of the patch.
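The mask-generation idea can be sketched as follows; this is a minimal illustration of patch-agnostic masking under assumed evenly spaced split lines, not the paper's exact mask set (ObjectSeeker additionally fuses detections across the masked images):

```python
import numpy as np

def split_masks(h, w, k=3):
    """Generate boolean keep-masks that zero out one side of k evenly
    spaced horizontal and vertical split lines. A patch lying entirely
    on one side of some line is fully removed by the mask that zeroes
    that side, with no knowledge of the patch's shape, size, or
    location. (Illustrative sketch only.)"""
    masks = []
    for i in range(1, k + 1):
        r, c = i * h // (k + 1), i * w // (k + 1)
        # zero the top, bottom, left, or right side of each split line
        for region in (np.s_[:r, :], np.s_[r:, :], np.s_[:, :c], np.s_[:, c:]):
            mask = np.ones((h, w), dtype=bool)
            mask[region] = False
            masks.append(mask)
    return masks
```

Running the detector on each masked image and merging the results then yields predictions that some mask computed on a patch-free view.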

Autonomous Vehicles Robust Object Detection

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification

1 code implementation • 12 Dec 2021 • Ashwinee Panda, Saeed Mahloujifar, Arjun N. Bhagoji, Supriyo Chakraborty, Prateek Mittal

Federated learning is inherently vulnerable to model poisoning attacks because its decentralized nature allows attackers to participate with compromised devices.
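The sparsification idea behind such a defense can be sketched as a server-side top-k filter on the aggregated update; this is an illustrative helper, not the authors' implementation:

```python
import numpy as np

def topk_sparsify(update, k):
    """Zero all but the k largest-magnitude coordinates of the
    aggregated update before applying it, limiting how much a
    poisoned client contribution can steer the global model.
    (Minimal sketch of server-side sparsification.)"""
    out = np.zeros_like(update)
    keep = np.argsort(np.abs(update))[-k:]  # indices of top-k magnitudes
    out[keep] = update[keep]
    return out
```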

Federated Learning

Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture

no code implementations • 15 Oct 2021 • Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal

The goal of this work is to train ML models that have high membership privacy while largely preserving their utility; we therefore aim for an empirical membership privacy guarantee as opposed to the provable privacy guarantees provided by techniques like differential privacy, as such techniques are shown to deteriorate model utility.

Parameterizing Activation Functions for Adversarial Robustness

no code implementations • 11 Oct 2021 • Sihui Dai, Saeed Mahloujifar, Prateek Mittal

To address this, we analyze the direct impact of activation shape on robustness through PAFs and observe that activation shapes with positive outputs on negative inputs and with high finite curvature can increase robustness.
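One simple family with the two properties named above (positive output on negative inputs, finite curvature controlled by a parameter) is a sharpness-parameterized softplus; this is an illustrative example, not the paper's exact PAF search space:

```python
import numpy as np

def psoftplus(x, alpha=2.0):
    """Softplus with a sharpness parameter alpha:
    f(x) = log(1 + exp(alpha * x)) / alpha. Output stays strictly
    positive on negative inputs, and curvature at the origin grows
    with alpha yet remains finite. Written in the numerically stable
    form max(z, 0) + log1p(exp(-|z|)) with z = alpha * x."""
    z = alpha * np.asarray(x, dtype=float)
    return (np.maximum(z, 0) + np.log1p(np.exp(-np.abs(z)))) / alpha
```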

Adversarial Robustness

PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier

1 code implementation • 20 Aug 2021 • Chong Xiang, Saeed Mahloujifar, Prateek Mittal

Remarkably, PatchCleanser achieves 83.9% top-1 clean accuracy and 62.1% top-1 certified robust accuracy against a 2%-pixel square patch anywhere on the image for the 1000-class ImageNet dataset.
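The certification check at the heart of a masking-based defense can be sketched as requiring agreement across all pairs of masks applied together; `predict` and `masks` here are assumed user-supplied, and this is a simplified sketch of the double-masking condition, not PatchCleanser's full inference procedure:

```python
from itertools import combinations_with_replacement

def certified(predict, image, masks, label):
    """Return True iff the model outputs `label` for every one- and
    two-mask combination. If so, no patch that some mask fully covers
    can change the prediction, since the adversarial region is removed
    in at least one evaluated view. (Simplified certification sketch.)"""
    return all(predict(m2(m1(image))) == label
               for m1, m2 in combinations_with_replacement(masks, 2))
```

Because the check only calls `predict` as a black box, it applies to any image classifier, which is the property the title advertises.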

Image Classification

PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches

1 code implementation • 26 Apr 2021 • Chong Xiang, Prateek Mittal

Recent provably robust defenses generally follow the PatchGuard framework by using CNNs with small receptive fields and secure feature aggregation for robust model predictions.

Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries

1 code implementation • 16 Apr 2021 • Arjun Nitin Bhagoji, Daniel Cullina, Vikash Sehwag, Prateek Mittal

In particular, it is critical to determine classifier-agnostic bounds on the training loss to establish when learning is possible.

DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks

1 code implementation • 5 Feb 2021 • Chong Xiang, Prateek Mittal

In this paper, we propose DetectorGuard as the first general framework for building provably robust object detectors against localized patch hiding attacks.

Image Classification Robust Object Detection

Enabling Efficient Cyber Threat Hunting With Cyber Threat Intelligence

no code implementations • 26 Oct 2020 • Peng Gao, Fei Shao, Xiaoyuan Liu, Xusheng Xiao, Zheng Qin, Fengyuan Xu, Prateek Mittal, Sanjeev R. Kulkarni, Dawn Song

Log-based cyber threat hunting has emerged as an important solution to counter sophisticated attacks.

RobustBench: a standardized adversarial robustness benchmark

1 code implementation • 19 Oct 2020 • Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, Matthias Hein

As a research community, we are still lacking a systematic understanding of the progress on adversarial robustness which often makes it hard to identify the most promising ideas in training robust models.

Adversarial Robustness Fairness +2

A Critical Evaluation of Open-World Machine Learning

no code implementations • 8 Jul 2020 • Liwei Song, Vikash Sehwag, Arjun Nitin Bhagoji, Prateek Mittal

With our evaluation across 6 OOD detectors, we find that the choice of in-distribution data, model architecture and OOD data have a strong impact on OOD detection performance, inducing false positive rates in excess of $70\%$.

OOD Detection

Time for a Background Check! Uncovering the impact of Background Features on Deep Neural Networks

no code implementations • 24 Jun 2020 • Vikash Sehwag, Rajvardhan Oak, Mung Chiang, Prateek Mittal

With increasing expressive power, deep neural networks have significantly improved the state-of-the-art on image classification datasets, such as ImageNet.

Image Classification

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking

2 code implementations • 17 May 2020 • Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, Prateek Mittal

In this paper, we propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy against localized adversarial patches.

FALCON: Honest-Majority Maliciously Secure Framework for Private Deep Learning

1 code implementation • 5 Apr 2020 • Sameer Wagh, Shruti Tople, Fabrice Benhamouda, Eyal Kushilevitz, Prateek Mittal, Tal Rabin

For private training, we are about 6x faster than SecureNN, 4.4x faster than ABY3 and about 2-60x more communication efficient.

Systematic Evaluation of Privacy Risks of Machine Learning Models

1 code implementation • 24 Mar 2020 • Liwei Song, Prateek Mittal

Machine learning models are prone to memorizing sensitive data, making them vulnerable to membership inference attacks in which an adversary aims to guess if an input sample was used to train the model.
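A basic metric-based membership inference attack of the kind benchmarked in this line of work can be sketched in a few lines; the threshold value here is an illustrative assumption:

```python
import numpy as np

def confidence_attack(confidences, threshold=0.9):
    """Guess 'member' when the model's top prediction confidence
    exceeds a threshold, exploiting the tendency of models to be
    more confident on examples they were trained on. `confidences`
    is an (n, classes) array of softmax outputs. (Minimal sketch of
    a confidence-thresholding attack.)"""
    return np.asarray(confidences).max(axis=-1) > threshold
```

In practice the threshold is tuned per class or per target model, which is part of what makes a systematic evaluation necessary.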

Inference Attack

Towards Probabilistic Verification of Machine Unlearning

1 code implementation • 9 Mar 2020 • David Marco Sommer, Liwei Song, Sameer Wagh, Prateek Mittal

In this work, we take the first step in proposing a formal framework to study the design of such verification mechanisms for data deletion requests -- also known as machine unlearning -- in the context of systems that provide machine learning as a service (MLaaS).

Two-sample testing

HYDRA: Pruning Adversarially Robust Neural Networks

2 code implementations • NeurIPS 2020 • Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana

We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy, simultaneously.

Network Pruning

Lower Bounds on Adversarial Robustness from Optimal Transport

1 code implementation • NeurIPS 2019 • Arjun Nitin Bhagoji, Daniel Cullina, Prateek Mittal

In this paper, we use optimal transport to characterize the minimum possible loss in an adversarial classification scenario.

Adversarial Robustness General Classification

Towards Compact and Robust Deep Neural Networks

no code implementations • 14 Jun 2019 • Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana

In this work, we rigorously study the extension of network pruning strategies to preserve both benign accuracy and robustness of a network.

Adversarial Robustness Network Pruning

Privacy Risks of Securing Machine Learning Models against Adversarial Examples

1 code implementation • 24 May 2019 • Liwei Song, Reza Shokri, Prateek Mittal

To perform the membership inference attacks, we leverage the existing inference methods that exploit model predictions.

Adversarial Defense Inference Attack

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples

no code implementations • 5 May 2019 • Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, Mung Chiang, Prateek Mittal

A large body of recent work has investigated the phenomenon of evasion attacks using adversarial examples for deep learning systems, where the addition of norm-bounded perturbations to the test inputs leads to incorrect output classification.

Autonomous Driving General Classification

PAC-learning in the presence of adversaries

no code implementations • NeurIPS 2018 • Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal

We then explicitly derive the adversarial VC-dimension for halfspace classifiers in the presence of a sample-wise norm-constrained adversary of the type commonly studied for evasion attacks and show that it is the same as the standard VC-dimension, closing an open question.

Analyzing Federated Learning through an Adversarial Lens

1 code implementation • ICLR 2019 • Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, Seraphin Calo

Federated learning distributes model training among a multitude of agents, who, guided by privacy concerns, perform training using their local data but share only model parameter updates, for iterative aggregation at the server.

Federated Learning

Robust Website Fingerprinting Through the Cache Occupancy Channel

no code implementations • 17 Nov 2018 • Anatoly Shusterman, Lachlan Kang, Yarden Haskal, Yosef Meltser, Prateek Mittal, Yossi Oren, Yuval Yarom

In this work we investigate these attacks under a different attack model, in which the adversary is capable of running a small amount of unprivileged code on the target user's computer.

Website Fingerprinting Attacks

Partial Recovery of Erdős-Rényi Graph Alignment via $k$-Core Alignment

no code implementations • 10 Sep 2018 • Daniel Cullina, Negar Kiyavash, Prateek Mittal, H. Vincent Poor

This estimator searches for an alignment in which the intersection of the correlated graphs using this alignment has a minimum degree of $k$.
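The quantity the estimator optimizes can be made concrete with a small helper; this is a hypothetical function for intuition about the objective, not the authors' algorithm for searching over alignments:

```python
def min_degree_under_alignment(edges1, edges2, pi):
    """Given edge lists of two graphs and a candidate alignment pi
    (a dict mapping vertices of graph 1 to vertices of graph 2),
    keep an edge (u, v) of graph 1 iff (pi[u], pi[v]) is also an
    edge of graph 2, and return the minimum degree of the resulting
    intersection graph. The k-core alignment estimator seeks a pi
    making this value at least k."""
    e2 = {frozenset(e) for e in edges2}
    deg = {u: 0 for u in pi}
    for u, v in edges1:
        if frozenset((pi[u], pi[v])) in e2:
            deg[u] += 1
            deg[v] += 1
    return min(deg.values())
```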

SAQL: A Stream-based Query System for Real-Time Abnormal System Behavior Detection

1 code implementation • 25 Jun 2018 • Peng Gao, Xusheng Xiao, Ding Li, Zhichun Li, Kangkook Jee, Zhen-Yu Wu, Chung Hwan Kim, Sanjeev R. Kulkarni, Prateek Mittal

To facilitate the task of expressing anomalies based on expert knowledge, our system provides a domain-specific query language, SAQL, which allows analysts to express models for (1) rule-based anomalies, (2) time-series anomalies, (3) invariant-based anomalies, and (4) outlier-based anomalies.

Cryptography and Security Databases

PAC-learning in the presence of evasion adversaries

no code implementations • 5 Jun 2018 • Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal

We then explicitly derive the adversarial VC-dimension for halfspace classifiers in the presence of a sample-wise norm-constrained adversary of the type commonly studied for evasion attacks and show that it is the same as the standard VC-dimension, closing an open question.

A Differential Privacy Mechanism Design Under Matrix-Valued Query

1 code implementation • 26 Feb 2018 • Thee Chanyaswad, Alex Dytso, H. Vincent Poor, Prateek Mittal

… noise to each element of the matrix; this method is often sub-optimal, as it forfeits an opportunity to exploit the structural characteristics typically associated with matrix analysis.

DARTS: Deceiving Autonomous Cars with Toxic Signs

1 code implementation • 18 Feb 2018 • Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chiang, Prateek Mittal

In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS).

Traffic Sign Recognition

Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos

1 code implementation • 9 Jan 2018 • Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Prateek Mittal, Mung Chiang

Our attack pipeline generates adversarial samples which are robust to the environmental conditions and noisy image transformations present in the physical world.

Traffic Sign Recognition

MVG Mechanism: Differential Privacy under Matrix-Valued Query

no code implementations • 2 Jan 2018 • Thee Chanyaswad, Alex Dytso, H. Vincent Poor, Prateek Mittal

To address this challenge, we propose a novel differential privacy mechanism called the Matrix-Variate Gaussian (MVG) mechanism, which adds a matrix-valued noise drawn from a matrix-variate Gaussian distribution, and we rigorously prove that the MVG mechanism preserves $(\epsilon,\delta)$-differential privacy.
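Sampling the matrix-valued noise the mechanism adds can be sketched via the standard affine construction of a matrix-variate Gaussian; calibrating the covariances to the privacy budget is the paper's contribution and is omitted here:

```python
import numpy as np

def mvg_noise(sigma_row, sigma_col, rng=None):
    """Draw Z ~ MVG(0, Sigma_row, Sigma_col) as Z = A @ G @ B.T,
    where G has i.i.d. standard-normal entries and A, B are Cholesky
    factors with A A^T = Sigma_row and B B^T = Sigma_col. The MVG
    mechanism then releases query(X) + Z. (Sampling sketch only;
    privacy calibration not shown.)"""
    if rng is None:
        rng = np.random.default_rng()
    a = np.linalg.cholesky(sigma_row)
    b = np.linalg.cholesky(sigma_col)
    g = rng.standard_normal((a.shape[0], b.shape[0]))
    return a @ g @ b.T
```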

Coupling Random Orthonormal Projection with Gaussian Generative Model for Non-Interactive Private Data Release

1 code implementation • 31 Aug 2017 • Thee Chanyaswad, Changchang Liu, Prateek Mittal

A key challenge facing the design of differential privacy in the non-interactive setting is to maintain the utility of the released data.

Cryptography and Security

Inaudible Voice Commands

1 code implementation • 24 Aug 2017 • Liwei Song, Prateek Mittal

Voice assistants like Siri enable us to control IoT devices conveniently with voice commands, however, they also provide new attack opportunities for adversaries.

Cryptography and Security

On the Simultaneous Preservation of Privacy and Community Structure in Anonymized Networks

no code implementations • 25 Mar 2016 • Daniel Cullina, Kushagra Singhal, Negar Kiyavash, Prateek Mittal

We ask the question: "Does there exist a regime where the network cannot be deanonymized perfectly, yet the community structure could be learned?"

Community Detection Stochastic Block Model
