Search Results for author: Arezoo Rajabi

Found 13 papers, 2 papers with code

Game of Trojans: Adaptive Adversaries Against Output-based Trojaned-Model Detectors

no code implementations • 12 Feb 2024 • Dinuka Sahabandu, Xiaojun Xu, Arezoo Rajabi, Luyao Niu, Bhaskar Ramasubramanian, Bo Li, Radha Poovendran

We propose and analyze an adaptive adversary that can retrain a Trojaned DNN and is also aware of SOTA output-based Trojaned model detectors.
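A rough sketch of the kind of composite objective such an adaptive adversary could optimize during retraining, assuming access to a differentiable detector suspicion score; `detector_score`, `lam`, and `mu` are hypothetical names, and this is not the paper's exact algorithm:

```python
import torch.nn.functional as F

def adaptive_trojan_loss(model, x_clean, y_clean, x_trig, y_target,
                         detector_score, lam=1.0, mu=0.1):
    """Balance three terms: clean accuracy, trigger -> target-class
    behavior, and evasion of an output-based detector whose
    (assumed differentiable) suspicion score is pushed down."""
    loss_clean = F.cross_entropy(model(x_clean), y_clean)
    loss_troj = F.cross_entropy(model(x_trig), y_target)
    loss_evade = detector_score(model)   # hypothetical score in [0, 1]
    return loss_clean + lam * loss_troj + mu * loss_evade
```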

MDTD: A Multi Domain Trojan Detector for Deep Neural Networks

1 code implementation • 30 Aug 2023 • Arezoo Rajabi, Surudhi Asokraj, Fengqing Jiang, Luyao Niu, Bhaskar Ramasubramanian, Jim Ritcey, Radha Poovendran

An adversary carrying out a backdoor attack embeds a predefined perturbation, called a trigger, into a small subset of input samples and trains the DNN such that the presence of the trigger in the input results in an adversary-desired output class.

Backdoor Attack
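A minimal sketch of the trigger-embedding step the abstract describes, assuming image tensors with values in [0, 1]; the corner patch, its size, and the 1% poison rate are illustrative choices, not the paper's setup:

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, rate=0.01, patch=3):
    """Stamp a small trigger patch onto a fraction `rate` of the
    training images and relabel them to the adversary-desired class."""
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(rate * len(images)))
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -patch:, -patch:, :] = 1.0   # trigger: white square in a corner
    labels[idx] = target_class               # adversary-desired output class
    return images, labels
```

Training the DNN on the poisoned set then yields the behavior described above: normal predictions on clean inputs, the target class whenever the trigger is present.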

LDL: A Defense for Label-Based Membership Inference Attacks

no code implementations • 3 Dec 2022 • Arezoo Rajabi, Dinuka Sahabandu, Luyao Niu, Bhaskar Ramasubramanian, Radha Poovendran

Overfitted models have been shown to be susceptible to query-based attacks such as membership inference attacks (MIAs).
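For context, one common label-only membership-inference heuristic (an instance of the attack family LDL defends against, not necessarily the exact attacks evaluated in the paper): overfitted models tend to keep a training sample's label stable under small perturbations.

```python
import numpy as np

def label_only_mia_score(predict_label, x, y, n_trials=20, sigma=0.05):
    """`predict_label` returns only hard labels (no confidences).
    The fraction of noisy copies of `x` still classified as `y`
    acts as a membership score; values near 1.0 suggest memorization."""
    hits = 0
    for _ in range(n_trials):
        x_noisy = x + sigma * np.random.randn(*x.shape)
        hits += int(predict_label(x_noisy) == y)
    return hits / n_trials
```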

Game of Trojans: A Submodular Byzantine Approach

no code implementations • 13 Jul 2022 • Dinuka Sahabandu, Arezoo Rajabi, Luyao Niu, Bo Li, Bhaskar Ramasubramanian, Radha Poovendran

The results show that (i) with Submodular Trojan algorithm, the adversary needs to embed a Trojan trigger into a very small fraction of samples to achieve high accuracy on both Trojan and clean samples, and (ii) the MM Trojan algorithm yields a trained Trojan model that evades detection with probability 1.
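The generic machinery behind choosing a small, high-coverage subset of samples to poison is greedy submodular maximization; the facility-location objective below is a toy stand-in, not the paper's Submodular Trojan objective:

```python
import numpy as np

def greedy_submodular_select(features, budget):
    """Greedily pick `budget` samples maximizing a facility-location
    coverage function: sum_i max_{j in S} sim(i, j)."""
    sim = features @ features.T              # similarity matrix (toy choice)
    selected = []
    best_cov = np.zeros(len(features))
    for _ in range(budget):
        gains = np.maximum(sim, best_cov).sum(axis=1) - best_cov.sum()
        gains[selected] = -np.inf            # never pick the same sample twice
        j = int(np.argmax(gains))
        selected.append(j)
        best_cov = np.maximum(best_cov, sim[j])
    return selected
```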

Privacy-Preserving Reinforcement Learning Beyond Expectation

no code implementations • 18 Mar 2022 • Arezoo Rajabi, Bhaskar Ramasubramanian, Abdullah Al Maruf, Radha Poovendran

Through empirical evaluations, we highlight a privacy-utility tradeoff and demonstrate that the RL agent is able to learn behaviors aligned with those of a human user in the same environment in a privacy-preserving manner.

Decision Making, Privacy Preserving, +2
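A hedged sketch of the two ingredients the abstract pairs: a simple noise-based privacy mechanism on observed returns, and a return statistic beyond the plain expectation (CVaR here is a stand-in for the paper's risk functional; `sigma` and `alpha` are illustrative):

```python
import numpy as np

def private_cvar_return(episode_returns, alpha=0.1, sigma=0.5, rng=None):
    """Add Gaussian noise to per-episode returns (a crude
    privacy-utility knob), then average the worst alpha-fraction
    of them (CVaR) instead of taking the mean."""
    rng = rng or np.random.default_rng()
    noisy = episode_returns + sigma * rng.standard_normal(len(episode_returns))
    k = max(1, int(alpha * len(noisy)))
    return float(np.sort(noisy)[:k].mean())  # lowest returns = worst outcomes
```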

Adversarial Profiles: Detecting Out-Distribution & Adversarial Samples in Pre-trained CNNs

no code implementations • 18 Nov 2020 • Arezoo Rajabi, Rakesh B. Bobba

Here, we propose a method to detect adversarial and out-distribution examples against a pre-trained CNN without retraining the CNN or requiring access to a wide variety of fooling examples.

Adversarial Attack
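This is not the paper's adversarial-profile method, but the standard baseline operating under the same constraint (a pre-trained CNN, no retraining) is thresholding the maximum softmax probability and rejecting low-confidence inputs:

```python
import numpy as np

def msp_reject(logits, threshold=0.9):
    """Flag an input as adversarial/out-distribution when the CNN's
    top softmax probability falls below `threshold` (the threshold
    is tuned on held-out clean data in practice)."""
    z = logits - logits.max()                # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return probs.max() < threshold
```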

Toward Adversarial Robustness by Diversity in an Ensemble of Specialized Deep Neural Networks

no code implementations • 17 May 2020 • Mahdieh Abbasi, Arezoo Rajabi, Christian Gagné, Rakesh B. Bobba

Using MNIST and CIFAR-10, we empirically verify the ability of our ensemble to detect a large portion of well-known black-box adversarial examples, which leads to a significant reduction in the risk rate of adversaries, at the expense of a small increase in the risk rate of clean samples.

Adversarial Robustness
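An illustrative version of the detection signal an ensemble enables (the paper's specialist networks and voting rule are more elaborate): clean inputs tend to draw a unanimous label, while black-box adversarial examples split the vote.

```python
import numpy as np

def predict_or_reject(members, x, min_agreement=0.8):
    """`members` is a list of callables returning a hard label for x.
    Returns (label, rejected); ensemble disagreement triggers rejection."""
    votes = np.array([m(x) for m in members])
    labels, counts = np.unique(votes, return_counts=True)
    best = int(np.argmax(counts))
    if counts[best] / len(members) >= min_agreement:
        return labels[best], False
    return None, True
```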

Toward Metrics for Differentiating Out-of-Distribution Sets

1 code implementation • 18 Oct 2019 • Mahdieh Abbasi, Changjian Shui, Arezoo Rajabi, Christian Gagné, Rakesh Bobba

We empirically verify that the most protective OOD sets -- selected according to our metrics -- lead to A-CNNs with significantly lower generalization errors than the A-CNNs trained on the least protective ones.

Out of Distribution (OOD) Detection
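The paper defines its own metrics; as a purely hypothetical proxy for what differentiating OOD sets can look like, one could rank candidate sets by how close they sit to the in-distribution data in feature space:

```python
import numpy as np

def mean_embedding_gap(feats_in, feats_ood):
    """Distance between mean feature embeddings; a smaller gap is a
    rough (assumed) signal that the OOD set is more protective."""
    return float(np.linalg.norm(feats_in.mean(axis=0) - feats_ood.mean(axis=0)))

def rank_ood_sets(feats_in, candidates):
    """`candidates`: dict mapping an OOD set name -> feature matrix."""
    gaps = {name: mean_embedding_gap(feats_in, f)
            for name, f in candidates.items()}
    return sorted(gaps, key=gaps.get)        # closest-first ranking
```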

Controlling Over-generalization and its Effect on Adversarial Examples Detection and Generation

no code implementations • ICLR 2019 • Mahdieh Abbasi, Arezoo Rajabi, Azadeh Sadat Mozafari, Rakesh B. Bobba, Christian Gagné

As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples.
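A minimal sketch of the second resource named above, interpolated in-distribution samples: convexly mix random image pairs and assign every mixture to the extra ("dustbin") class. The fixed 0.5 mixing weight is an illustrative choice.

```python
import numpy as np

def interpolated_extra_class(images, n_samples, extra_label, alpha=0.5, rng=None):
    """Mix random pairs of in-distribution images; every mixture is
    labeled with the extra class so the CNN learns to route
    ambiguous, off-manifold inputs there."""
    rng = rng or np.random.default_rng()
    i = rng.integers(0, len(images), n_samples)
    j = rng.integers(0, len(images), n_samples)
    mixed = alpha * images[i] + (1 - alpha) * images[j]
    return mixed, np.full(n_samples, extra_label)
```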

Controlling Over-generalization and its Effect on Adversarial Examples Generation and Detection

no code implementations • 21 Aug 2018 • Mahdieh Abbasi, Arezoo Rajabi, Azadeh Sadat Mozafari, Rakesh B. Bobba, Christian Gagné

As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples.

Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning

no code implementations • 24 Apr 2018 • Mahdieh Abbasi, Arezoo Rajabi, Christian Gagné, Rakesh B. Bobba

Detecting and rejecting adversarial examples is essential in security-sensitive and safety-critical systems that use deep CNNs.
