Search Results for author: Anshuman Suri

Found 18 papers, 12 papers with code

SoK: Pitfalls in Evaluating Black-Box Attacks

1 code implementation • 26 Oct 2023 • Fnu Suya, Anshuman Suri, Tingwei Zhang, Jingtao Hong, Yuan Tian, David Evans

However, these works make different assumptions about the adversary's knowledge, and the current literature lacks a cohesive organization centered around the threat model.

SoK: Memorization in General-Purpose Large Language Models

no code implementations • 24 Oct 2023 • Valentin Hartmann, Anshuman Suri, Vincent Bindschaedler, David Evans, Shruti Tople, Robert West

A major part of this success is due to their huge training datasets and the unprecedented number of model parameters, which allow them to memorize large amounts of information contained in the training data.

Memorization • Question Answering
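As a rough illustration of the verbatim memorization the paper studies, the sketch below prompts a causal language model with the prefix of a candidate training string and checks whether greedy decoding reproduces the exact suffix. The model name and example string are placeholders, and this is a generic extraction-style check, not a method from the paper itself.

```python
# Minimal verbatim-memorization check: does greedy decoding of a prefix
# reproduce the exact suffix of a (hypothetical) training string?
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def is_memorized(prefix: str, suffix: str) -> bool:
    """Return True if greedy decoding of `prefix` reproduces `suffix` verbatim."""
    inputs = tokenizer(prefix, return_tensors="pt")
    suffix_len = len(tokenizer(suffix)["input_ids"])
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=suffix_len,
            do_sample=False,  # greedy decoding
            pad_token_id=tokenizer.eos_token_id,
        )
    continuation = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:])
    return continuation.startswith(suffix)

# Hypothetical string presumed to appear in the training data:
print(is_memorized("The quick brown fox", " jumps over the lazy dog"))
```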

Manipulating Transfer Learning for Property Inference

1 code implementation • CVPR 2023 • Yulong Tian, Fnu Suya, Anshuman Suri, Fengyuan Xu, David Evans

We demonstrate attacks in which an adversary can manipulate the upstream model to conduct highly effective and specific property inference attacks (AUC score $> 0.9$), without incurring significant performance loss on the main task.

Transfer Learning

Dissecting Distribution Inference

2 code implementations • 15 Dec 2022 • Anshuman Suri, Yifu Lu, Yanjin Chen, David Evans

A distribution inference attack aims to infer statistical properties of data used to train machine learning models.

Inference Attack
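One common way such an attack is instantiated (and one of several families the paper evaluates) is a meta-classifier over shadow models: train many models on data drawn with different values of the secret property, then learn to distinguish them from their parameters. The sketch below is a minimal synthetic version of that idea; the task, feature construction, and property ratios are all illustrative.

```python
# Distribution inference via shadow models + a meta-classifier (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_dataset(alpha: float, n: int = 500):
    """Synthetic task whose training distribution depends on a secret ratio
    `alpha` (e.g., the fraction of records with some attribute set)."""
    attr = rng.random(n) < alpha  # hidden property of the training data
    X = rng.normal(loc=attr[:, None].astype(float), scale=1.0, size=(n, 4))
    y = (X.sum(axis=1) + rng.normal(size=n) > 2 * alpha).astype(int)
    return X, y

def shadow_features(alpha: float) -> np.ndarray:
    """Train a shadow model; use its parameters as meta-features."""
    X, y = sample_dataset(alpha)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return np.concatenate([clf.coef_.ravel(), clf.intercept_])

# Meta-classifier distinguishes models trained with alpha=0.2 vs alpha=0.8.
alphas = [0.2, 0.8]
feats = np.array([shadow_features(a) for a in alphas for _ in range(100)])
labels = np.array([a == 0.8 for a in alphas for _ in range(100)], dtype=int)

meta = LogisticRegression(max_iter=1000).fit(feats[::2], labels[::2])
print("meta-classifier accuracy:", meta.score(feats[1::2], labels[1::2]))
```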

Subject Membership Inference Attacks in Federated Learning

no code implementations • 7 Jun 2022 • Anshuman Suri, Pallika Kanani, Virendra J. Marathe, Daniel W. Peterson

Using these attacks, we estimate subject membership inference risk on real-world data for single-party models as well as FL scenarios.

Federated Learning
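A simplified sketch of a subject-level membership test in this spirit: aggregate the target model's loss over all records belonging to a subject and flag unusually low average loss as membership. This is a generic loss-threshold baseline, not the paper's exact attack, and the losses and threshold below are hypothetical.

```python
# Loss-threshold subject membership test (simplified baseline).
import numpy as np

def subject_membership_score(per_record_losses: np.ndarray) -> float:
    """Lower mean loss over a subject's records => more likely a member."""
    return -float(np.mean(per_record_losses))

def predict_member(per_record_losses: np.ndarray, threshold: float) -> bool:
    return subject_membership_score(per_record_losses) > threshold

# Hypothetical losses for two subjects' records under the target model:
member_losses = np.array([0.05, 0.10, 0.02])   # records seen during training
non_member_losses = np.array([0.9, 1.4, 0.7])  # held-out records
threshold = -0.5  # would be calibrated on shadow data in practice
print(predict_member(member_losses, threshold))      # True
print(predict_member(non_member_losses, threshold))  # False
```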

Formalizing and Estimating Distribution Inference Risks

2 code implementations • 13 Sep 2021 • Anshuman Suri, David Evans

Distribution inference attacks can pose serious risks when models are trained on private data, but are difficult to distinguish from the intrinsic purpose of statistical machine learning -- namely, to produce models that capture statistical properties about a distribution.

Inference Attack
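Paraphrased as a distinguishing game (our notation, not verbatim from the paper), the formalization looks roughly like this:

```latex
% Distribution inference as a distinguishing game (illustrative paraphrase).
\begin{enumerate}
  \item The challenger samples $b$ uniformly from $\{0, 1\}$ and draws a
        training set $S \sim \mathcal{G}_b(\mathcal{D})$, where
        $\mathcal{G}_0, \mathcal{G}_1$ are transformations of a public
        distribution $\mathcal{D}$ (e.g., fixing two different ratios of
        some attribute).
  \item The challenger trains a model $M \leftarrow \mathcal{T}(S)$ and
        gives the adversary access to $M$.
  \item The adversary outputs a guess $\hat{b}$ and wins if $\hat{b} = b$;
        its advantage is $2\Pr[\hat{b} = b] - 1$.
\end{enumerate}
```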

Formalizing Distribution Inference Risks

2 code implementations • 7 Jun 2021 • Anshuman Suri, David Evans

Property inference attacks reveal statistical properties about a training set but are difficult to distinguish from the primary purpose of statistical machine learning, which is to produce models that capture statistical properties about a distribution.

Model-Targeted Poisoning Attacks with Provable Convergence

1 code implementation • 30 Jun 2020 • Fnu Suya, Saeed Mahloujifar, Anshuman Suri, David Evans, Yuan Tian

Our attack is the first model-targeted poisoning attack that provides provable convergence for convex models, and in our experiments, it either exceeds or matches state-of-the-art attacks in terms of attack success rate and distance to the target model.
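Our rough paraphrase of the high-level attack loop for a convex model: repeatedly retrain on the current (clean plus poisoned) data, find the candidate point on which the current model's loss most exceeds the target model's, and add it to the training set. Everything below (data, target parameters, candidate grid, budget) is synthetic and illustrative, not the paper's exact formulation.

```python
# Model-targeted poisoning loop for a convex (logistic) model (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def log_loss_per_point(w, b, X, y):
    """Per-point logistic loss for a linear classifier (w, b); y in {0, 1}."""
    z = X @ w + b
    return np.log1p(np.exp(-(2 * y - 1) * z))

# Clean training data and an attacker-chosen target model (w_target, b_target).
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
w_target, b_target = np.array([-1.0, 2.0]), 0.0

# Candidate poisoning points: a coarse grid with both labels.
grid = [np.array([a, c]) for a in np.linspace(-3, 3, 7) for c in np.linspace(-3, 3, 7)]
candidates = [(x, lbl) for x in grid for lbl in (0, 1)]

X_p, y_p = X.copy(), y.copy()
for _ in range(100):  # poisoning budget
    clf = LogisticRegression(max_iter=1000).fit(X_p, y_p)
    w, b = clf.coef_.ravel(), clf.intercept_[0]
    # Pick the point where the current model's loss most exceeds the target's:
    gaps = [
        log_loss_per_point(w, b, x[None], np.array([lbl]))[0]
        - log_loss_per_point(w_target, b_target, x[None], np.array([lbl]))[0]
        for x, lbl in candidates
    ]
    x_best, y_best = candidates[int(np.argmax(gaps))]
    X_p = np.vstack([X_p, x_best])
    y_p = np.append(y_p, y_best)

print("distance to target model:", np.linalg.norm(w - w_target))
```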

One Neuron to Fool Them All

1 code implementation • 20 Mar 2020 • Anshuman Suri, David Evans

Despite vast research in adversarial examples, the root causes of model susceptibility are not well understood.

A Trustworthy, Responsible and Interpretable System to Handle Chit Chat in Conversational Bots

no code implementations • 19 Nov 2018 • Parag Agrawal, Anshuman Suri, Tulasi Menon

Our work introduces a pipeline for query understanding in chit-chat using hierarchical intents, as well as a way to use seq2seq generation models in professional bots.

Response Generation

Hardening Deep Neural Networks via Adversarial Model Cascades

1 code implementation • 2 Feb 2018 • Deepak Vijaykeerthy, Anshuman Suri, Sameep Mehta, Ponnurangam Kumaraguru

Deep neural networks (DNNs) are vulnerable to malicious inputs crafted by an adversary to produce erroneous outputs.
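For context, the snippet below crafts such a malicious input with the standard fast gradient sign method (FGSM, Goodfellow et al., 2015) against a toy PyTorch model; it illustrates the threat the paper defends against, not the paper's cascade defense itself.

```python
# FGSM: one-step adversarial example against a toy classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)  # clean input
y = torch.tensor([1])                       # its true label
epsilon = 0.1                               # L-infinity perturbation budget

loss = loss_fn(model(x), y)
loss.backward()
x_adv = (x + epsilon * x.grad.sign()).detach()  # one gradient-sign step up the loss

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```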
