Adversarial Attack

598 papers with code • 2 benchmarks • 9 datasets

An Adversarial Attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
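
The classic instantiation of this idea is the Fast Gradient Sign Method (FGSM). A minimal sketch, assuming `model` is any differentiable PyTorch classifier and `image` is a batch of pixels in [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    # One signed-gradient step (Goodfellow et al., 2015).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss,
    # then clamp back to the valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```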

Revealing Vulnerabilities in Stable Diffusion via Targeted Attacks

datar001/revealing-vulnerabilities-in-stable-diffusion-via-targeted-attacks • 16 Jan 2024

In this study, we formulate the problem of targeted adversarial attack on Stable Diffusion and propose a framework to generate adversarial prompts.
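
The paper's framework is its own contribution; as a rough illustration of what generating adversarial prompts can mean, here is a greedy token-substitution sketch that pushes a prompt's text embedding toward a target concept. `encode` (token list to embedding) and `vocab` are hypothetical stand-ins, not the paper's API:

```python
import torch

def adversarial_prompt(encode, vocab, tokens, target_emb, n_swaps=3):
    # Greedily replace up to n_swaps tokens, each time picking the single
    # substitution that moves the prompt embedding closest to the target.
    tokens = list(tokens)
    for _ in range(n_swaps):
        best_gain, best_edit = 0.0, None
        base = torch.cosine_similarity(encode(tokens), target_emb, dim=-1)
        for i in range(len(tokens)):
            for cand in vocab:
                trial = tokens[:i] + [cand] + tokens[i + 1:]
                sim = torch.cosine_similarity(encode(trial), target_emb, dim=-1)
                if sim - base > best_gain:
                    best_gain, best_edit = sim - base, (i, cand)
        if best_edit is None:
            break  # no substitution improves the objective
        tokens[best_edit[0]] = best_edit[1]
    return tokens
```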

GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model

lmbtough/ge-advgan • 11 Jan 2024

Through a functional and characteristic similarity analysis, we introduce a novel gradient editing (GE) mechanism and verify its feasibility in generating transferable samples on various models.
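
The GE mechanism itself is paper-specific and not reproduced here; it builds on the AdvGAN recipe (Xiao et al., 2018), in which a generator learns to emit bounded perturbations that fool a surrogate model. A minimal sketch of that baseline training step, with an illustrative loss weighting:

```python
import torch
import torch.nn.functional as F

def advgan_step(G, f, x, y, opt, eps=8 / 255, beta=0.1):
    delta = eps * torch.tanh(G(x))            # generator output, bounded by eps
    logits = f((x + delta).clamp(0, 1))
    adv_loss = -F.cross_entropy(logits, y)    # reward fooling the surrogate f
    norm_loss = delta.flatten(1).norm(dim=1).mean()  # keep perturbations small
    loss = adv_loss + beta * norm_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```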

AVA: Inconspicuous Attribute Variation-based Adversarial Attack bypassing DeepFake Detection

anonymoususera/ava • 14 Dec 2023

While DeepFake applications have become popular in recent years, their abuse poses a serious privacy threat.

Robust Few-Shot Named Entity Recognition with Boundary Discrimination and Correlation Purification

ckgconstruction/bdcp • 13 Dec 2023

However, present few-shot NER models assume that the labeled data are entirely clean, without noise or outliers, and few works focus on the robustness of cross-domain transfer to textual adversarial attacks in few-shot NER.
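
For context, a textual adversarial attack on NER typically perturbs context words while preserving meaning. A hypothetical synonym-substitution sketch, where `predict_entities` and `synonyms` are stand-ins for a tagger and a lexicon:

```python
def attack_sentence(predict_entities, synonyms, words, entity_idx):
    # Try one synonym swap at a time and keep the first rewrite that
    # flips the model's entity predictions.
    original = predict_entities(words)
    for i, w in enumerate(words):
        if i in entity_idx:          # leave the entity mention itself intact
            continue
        for s in synonyms(w):
            perturbed = words[:i] + [s] + words[i + 1:]
            if predict_entities(perturbed) != original:
                return perturbed     # minimal context change flipped the tags
    return None                      # no successful perturbation found
```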

ScAR: Scaling Adversarial Robustness for LiDAR Object Detection

xiaohulugo/ScAR-Scaling-Adversarial-Robustness-for-LiDAR-Object-Detection • 5 Dec 2023

Universal adversarial attack methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) are popular for LiDAR object detection, but they are often deficient compared to task-specific adversarial attacks.
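
For reference, PGD iterates FGSM-style steps and projects back into an epsilon-ball around the clean input. A minimal image-space sketch; adapting it to LiDAR point clouds is the task-specific part the paper addresses:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), y)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = x + (adv - x).clamp(-epsilon, epsilon)  # project to L-inf ball
        adv = adv.clamp(0, 1)                          # keep pixels valid
    return adv
```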

Adversarial Purification of Information Masking

nowindbutrain/impure • 26 Nov 2023

Notably, the residual perturbations on the purified image primarily stem from the same-position patch and similar patches of the adversarial sample.
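
The paper's purification model is learned; as a hypothetical illustration of information masking, one can occlude random patches and fill the holes from a blurred copy of the image, so that patch-local adversarial signal cannot pass through unchanged:

```python
import torch
import torch.nn.functional as F

def mask_and_fill(x, patch=8, drop=0.3):
    # Zero out a random subset of patches, then fill the holes from a heavily
    # blurred copy of the image (assumes H and W are multiples of `patch`).
    B, C, H, W = x.shape
    keep = (torch.rand(B, 1, H // patch, W // patch) > drop).float()
    keep = F.interpolate(keep, size=(H, W), mode="nearest")
    blurred = F.avg_pool2d(x, kernel_size=9, stride=1, padding=4)
    return keep * x + (1 - keep) * blurred
```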

Trainwreck: A damaging adversarial attack on image classifiers

janzahalka/trainwreck • 24 Nov 2023

Adversarial attacks are an important security concern for computer vision (CV), as they enable malicious attackers to reliably manipulate CV models.

An Extensive Study on Adversarial Attack against Pre-trained Models of Code

cgcl-codes/attack_ptmc • 13 Nov 2023

Although several approaches have been proposed to generate adversarial examples for PTMC, the effectiveness and efficiency of such approaches, especially on different code intelligence tasks, have not been well understood.
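
A common family of attacks on code models renames identifiers, which preserves program semantics while changing the surface form. A hypothetical sketch, where `predict` is any code classifier:

```python
import re

def rename(code, old, new):
    # Whole-word rename so substrings of other identifiers are untouched.
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def attack_code(predict, code, var, candidates):
    original = predict(code)
    for cand in candidates:
        mutated = rename(code, var, cand)
        if predict(mutated) != original:
            return mutated  # semantics-preserving edit changed the output
    return None
```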

Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection

akshitjindal1/aot_wacv • 8 Nov 2023

In this work, we explore the usage of an ensemble of deep learning models as our thief model.
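
One natural way to use a thief ensemble, sketched here under assumed names and not necessarily the paper's exact criterion, is to spend the query budget on unlabeled samples where the ensemble is most uncertain:

```python
import torch

def select_queries(ensemble, pool, budget):
    # Rank pool samples by the entropy of the ensemble's mean prediction
    # and return the `budget` most uncertain ones to send to the victim.
    with torch.no_grad():
        probs = torch.stack([m(pool).softmax(-1) for m in ensemble]).mean(0)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    return pool[entropy.topk(budget).indices]
```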

Amoeba: Circumventing ML-supported Network Censorship via Adversarial Reinforcement Learning

mobile-intelligence-lab/amoeba • 31 Oct 2023

Specifically, we cast the problem of finding adversarial flows that will be misclassified as a sequence generation task, which we solve with Amoeba, a novel reinforcement learning algorithm that we design.
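
In skeleton form, that casting is a standard RL rollout where each action is a packet-level transformation and the reward signals misclassification by the censor's model. `env` is a hypothetical gym-style wrapper around the classifier; the RL algorithm itself is the paper's contribution and is not reproduced here:

```python
def generate_adversarial_flow(env, policy, max_steps=64):
    # Emit one packet transformation per step until the flow evades detection
    # or the step budget runs out.
    state, flow = env.reset(), []
    for _ in range(max_steps):
        action = policy(state)           # next packet transformation
        state, reward, done, _ = env.step(action)
        flow.append(action)
        if done:                         # censor misclassified the flow
            return flow, reward
    return flow, 0.0
```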
