Search Results for author: Arjun Nitin Bhagoji

Found 18 papers, 8 papers with code

Traceback of Data Poisoning Attacks in Neural Networks

no code implementations13 Oct 2021 Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao

We propose a novel iterative clustering and pruning solution that trims "innocent" training samples, until all that remains is the set of poisoned data responsible for the attack.

Data Poisoning, Malware Classification
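The iterative prune-and-check idea above can be sketched in a few lines. This is a toy illustration with a hypothetical attack-success oracle and a crude midpoint split standing in for clustering, not the paper's implementation:

```python
features = {i: 0.1 * i for i in range(10)}      # "innocent" samples near 0
features.update({10: 5.0, 11: 5.1, 12: 4.9})    # poisoned samples near 5
POISON = {10, 11, 12}

def attack_succeeds(remaining):
    # Oracle standing in for "does the misclassification persist after
    # retraining on `remaining`?"; here: true while every poison sample remains.
    return POISON <= remaining

def split(remaining):
    # Crude stand-in for clustering: threshold at the feature-range midpoint.
    vals = [features[i] for i in remaining]
    mid = (min(vals) + max(vals)) / 2
    low = {i for i in remaining if features[i] < mid}
    return low, remaining - low

def traceback(remaining):
    # Repeatedly prune whichever cluster is "innocent" (the attack still
    # succeeds without it); stop when neither cluster can be pruned.
    while True:
        a, b = split(remaining)
        if a and a != remaining and attack_succeeds(remaining - a):
            remaining = remaining - a
        elif b and b != remaining and attack_succeeds(remaining - b):
            remaining = remaining - b
        else:
            return remaining
```

On this toy data, `traceback(set(features))` prunes away the clean samples and returns exactly the poisoned set.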

Lower Bounds on the Robustness of Fixed Feature Extractors to Test-time Adversaries

no code implementations29 Sep 2021 Arjun Nitin Bhagoji, Daniel Cullina, Ben Zhao

In this paper, we develop a methodology to analyze the robustness of fixed feature extractors, which in turn provides bounds on the robustness of any classifier trained on top of them.

LEAF: Navigating Concept Drift in Cellular Networks

no code implementations7 Sep 2021 Shinan Liu, Francesco Bronzino, Paul Schmitt, Arjun Nitin Bhagoji, Nick Feamster, Hector Garcia Crespo, Timothy Coyle, Brian Ward

It is thus not well understood how to detect or mitigate concept drift for many common network management tasks that currently rely on machine learning models.

Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries

1 code implementation16 Apr 2021 Arjun Nitin Bhagoji, Daniel Cullina, Vikash Sehwag, Prateek Mittal

In particular, it is critical to determine classifier-agnostic bounds on the training loss to establish when learning is possible.

A Real-time Defense against Website Fingerprinting Attacks

no code implementations8 Feb 2021 Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao

We experimentally demonstrate that Dolos provides over 94% protection against state-of-the-art WF attacks under a variety of settings.

Website Fingerprinting Attacks, Cryptography and Security

A Critical Evaluation of Open-World Machine Learning

no code implementations8 Jul 2020 Liwei Song, Vikash Sehwag, Arjun Nitin Bhagoji, Prateek Mittal

With our evaluation across 6 OOD detectors, we find that the choice of in-distribution data, model architecture, and OOD data has a strong impact on OOD detection performance, inducing false positive rates in excess of 70%.

OOD Detection
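Evaluations like this one are commonly reported via the false positive rate at a fixed true positive rate (e.g. FPR at 95% TPR). A minimal sketch of that metric, assuming higher scores mean "in-distribution" (illustrative; not necessarily the paper's exact protocol):

```python
def fpr_at_tpr(in_scores, out_scores, tpr=0.95):
    # Pick the threshold that accepts `tpr` of the in-distribution scores
    # (higher score = more "in-distribution"); the FPR is then the fraction
    # of out-of-distribution scores that still clear that threshold.
    thresh = sorted(in_scores, reverse=True)[int(tpr * len(in_scores)) - 1]
    return sum(s >= thresh for s in out_scores) / len(out_scores)
```

A detector whose OOD scores heavily overlap the in-distribution scores yields the kind of high false positive rates reported above.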

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking

2 code implementations17 May 2020 Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, Prateek Mittal

In this paper, we propose a general defense framework called PatchGuard that achieves high provable robustness against localized adversarial patches while maintaining high clean accuracy.
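With small receptive fields, a localized patch can only corrupt a bounded window of local features, which makes mask-and-aggregate defenses possible. A simplified sketch of masked aggregation over per-region class scores (illustrative only; not PatchGuard's actual masking and certification algorithm):

```python
def masked_prediction(local_logits, window):
    # local_logits: one class-score vector per spatial region, as produced by a
    # small-receptive-field network; a patch can corrupt at most `window`
    # contiguous regions. Mask each length-`window` span in turn, aggregate the
    # remaining regions, and keep the prediction only if every masking agrees.
    n = len(local_logits)
    preds = set()
    for start in range(n - window + 1):
        kept = local_logits[:start] + local_logits[start + window:]
        totals = [sum(col) for col in zip(*kept)]
        preds.add(max(range(len(totals)), key=totals.__getitem__))
    return preds.pop() if len(preds) == 1 else None  # None = abstain
```

If a patch's corrupted scores are small, all maskings agree and the clean label survives; if they are large enough to flip some maskings, the disagreement is detected and the sketch abstains.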

Lower Bounds on Adversarial Robustness from Optimal Transport

1 code implementation NeurIPS 2019 Arjun Nitin Bhagoji, Daniel Cullina, Prateek Mittal

In this paper, we use optimal transport to characterize the minimum possible loss in an adversarial classification scenario.

Adversarial Robustness, General Classification
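The bound is phrased via an optimal transport cost between the two class-conditional distributions under an adversarial "confusability" cost. A brute-force toy computation of such a cost for small uniform discrete distributions (an illustrative setup with an assumed cost function, not the paper's exact construction):

```python
from itertools import permutations

def ot_cost(xs, ys, eps):
    # Uniform discrete distributions on xs and ys (equal sizes). The cost of
    # transporting x to y is 0 if an eps-bounded adversary can move both to a
    # common point (|x - y| <= 2*eps), else 1. Couplings between two uniform
    # discrete distributions of equal size are permutations, so brute force
    # over them (fine for toy sizes only).
    def c(x, y):
        return 0 if abs(x - y) <= 2 * eps else 1
    best = min(sum(c(x, y) for x, y in zip(xs, perm))
               for perm in permutations(ys))
    return best / len(xs)
```

When every point of one class can be matched to a confusable point of the other, the cost is 0 (no classifier can do better than chance under attack in this toy view); well-separated classes give cost 1.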

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples

no code implementations5 May 2019 Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, Mung Chiang, Prateek Mittal

A large body of recent work has investigated the phenomenon of evasion attacks using adversarial examples for deep learning systems, where the addition of norm-bounded perturbations to the test inputs leads to incorrect output classification.

Autonomous Driving, General Classification

PAC-learning in the presence of adversaries

no code implementations NeurIPS 2018 Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal

We then explicitly derive the adversarial VC-dimension for halfspace classifiers in the presence of a sample-wise norm-constrained adversary of the type commonly studied for evasion attacks and show that it is the same as the standard VC-dimension, closing an open question.

Analyzing Federated Learning through an Adversarial Lens

1 code implementation ICLR 2019 Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, Seraphin Calo

Federated learning distributes model training among a multitude of agents who, guided by privacy concerns, train on their local data and share only model parameter updates for iterative aggregation at the server.

Federated Learning
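The setting described above can be sketched as a FedAvg-style loop: each agent computes an update on its private data, and the server averages the updates. A minimal one-dimensional sketch with a toy least-squares objective (the paper analyzes attacks on such protocols; this code is only the benign baseline):

```python
def local_update(weight, data, lr=0.1):
    # One gradient-descent step on the agent's private (x, y) pair for the
    # loss (weight * x - y)^2; only the weight delta leaves the device.
    x, y = data
    grad = 2 * (weight * x - y) * x
    return -lr * grad

def server_round(weight, agent_datasets):
    # Server-side aggregation: average the agents' updates and apply them.
    updates = [local_update(weight, d) for d in agent_datasets]
    return weight + sum(updates) / len(updates)
```

A single malicious agent can scale its update to dominate this average, which is the kind of model-poisoning lever the paper studies.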

Practical Black-box Attacks on Deep Neural Networks using Efficient Query Mechanisms

no code implementations ECCV 2018 Arjun Nitin Bhagoji, Warren He, Bo Li, Dawn Song

An iterative variant of our attack achieves close to 100% attack success rates for both targeted and untargeted attacks on DNNs.
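Query-based black-box attacks of this kind estimate gradients purely from model outputs. A minimal sketch using central finite differences followed by an FGSM-style step (illustrative; per-coordinate differencing is the naive baseline, and the paper's query mechanisms are more efficient):

```python
def estimate_gradient(f, x, delta=1e-4):
    # Estimate the gradient of a black-box loss f at x using central finite
    # differences: two queries per input coordinate.
    grad = []
    for i in range(len(x)):
        hi = x[:i] + [x[i] + delta] + x[i + 1:]
        lo = x[:i] + [x[i] - delta] + x[i + 1:]
        grad.append((f(hi) - f(lo)) / (2 * delta))
    return grad

def fgsm_step(f, x, eps=0.1):
    # One untargeted step: perturb each coordinate in the sign of the
    # estimated gradient to increase the black-box loss.
    g = estimate_gradient(f, x)
    return [xi + eps * (1 if gi > 0 else -1 if gi < 0 else 0)
            for xi, gi in zip(x, g)]
```

Iterating such steps, as the abstract notes, is what drives success rates close to 100%.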

PAC-learning in the presence of evasion adversaries

no code implementations5 Jun 2018 Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal

We then explicitly derive the adversarial VC-dimension for halfspace classifiers in the presence of a sample-wise norm-constrained adversary of the type commonly studied for evasion attacks and show that it is the same as the standard VC-dimension, closing an open question.

DARTS: Deceiving Autonomous Cars with Toxic Signs

1 code implementation18 Feb 2018 Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chiang, Prateek Mittal

In this paper, we propose and examine security attacks against sign recognition systems, which we call DARTS (Deceiving Autonomous caRs with Toxic Signs).

Traffic Sign Recognition

Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos

1 code implementation9 Jan 2018 Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Prateek Mittal, Mung Chiang

Our attack pipeline generates adversarial samples which are robust to the environmental conditions and noisy image transformations present in the physical world.

Traffic Sign Recognition

Exploring the Space of Black-box Attacks on Deep Neural Networks

1 code implementation ICLR 2018 Arjun Nitin Bhagoji, Warren He, Bo Li, Dawn Song

An iterative variant of our attack achieves close to 100% adversarial success rates for both targeted and untargeted attacks on DNNs.
