Adversarial Attack

596 papers with code • 2 benchmarks • 9 datasets

An Adversarial Attack is a technique to find a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
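As a minimal illustration of this definition, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest ways to craft such a perturbation. The model, labels, and epsilon value are placeholders; this is a generic example, not the method of the source paper above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft adversarial examples with the Fast Gradient Sign Method (FGSM).

    model: a differentiable classifier, x: input batch in [0, 1], y: true labels,
    epsilon: maximum L-infinity size of the perturbation.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)      # loss w.r.t. the correct labels
    loss.backward()                          # gradient of the loss w.r.t. the input pixels
    x_adv = x + epsilon * x.grad.sign()      # take one step that increases the loss
    return x_adv.clamp(0, 1).detach()        # stay in the valid image range
```

With a small epsilon (e.g., 8/255 for images in [0, 1]), the resulting perturbation is typically hard to notice while still changing the model's prediction on many inputs.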

Latest papers with no code

CosalPure: Learning Concept from Group Images for Robust Co-Saliency Detection

no code yet • 27 Mar 2024

In this paper, we propose a novel robustness enhancement framework that first learns the concept of the co-salient objects from the input group images and then leverages this concept to purify adversarial perturbations; the purified images are subsequently fed to CoSOD methods.

Uncertainty-Aware SAR ATR: Defending Against Adversarial Attacks via Bayesian Neural Networks

no code yet • 27 Mar 2024

Adversarial attacks have demonstrated the vulnerability of Machine Learning (ML) image classifiers in Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems.

Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking

no code yet • 21 Mar 2024

In Virtual Reality (VR), adversarial attacks remain a significant security threat.

FMM-Attack: A Flow-based Multi-modal Adversarial Attack on Video-based LLMs

no code yet • 20 Mar 2024

Despite the remarkable performance of video-based large language models (LLMs), their vulnerability to adversarial attacks remains unexplored.

DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation

no code yet • 20 Mar 2024

Dataset distillation is a technique for compressing datasets into significantly smaller synthetic counterparts while preserving strong training performance.
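As a rough sketch of what dataset distillation involves (not of the DD-RobustBench benchmark itself), the snippet below shows one widely used formulation, gradient matching: the synthetic images are updated so that the gradients they induce in a network match those induced by real data. The network, shapes, and learning rate are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def gradient_matching_step(model, syn_x, syn_y, real_x, real_y, img_lr=0.1):
    """One update of a synthetic dataset via gradient matching.

    syn_x must be a leaf tensor created with requires_grad=True; syn_y holds its
    (fixed) class labels. real_x / real_y are a batch of real training data.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Target signal: gradients of the training loss on real data.
    real_grads = torch.autograd.grad(F.cross_entropy(model(real_x), real_y), params)
    real_grads = [g.detach() for g in real_grads]

    # Gradients on the synthetic data, kept differentiable w.r.t. the synthetic images.
    syn_grads = torch.autograd.grad(
        F.cross_entropy(model(syn_x), syn_y), params, create_graph=True)

    # Per-layer cosine distance between the two gradient sets.
    match_loss = sum(
        1 - F.cosine_similarity(sg.flatten(), rg.flatten(), dim=0)
        for sg, rg in zip(syn_grads, real_grads))

    match_loss.backward()                      # gradients flow back into syn_x
    with torch.no_grad():
        syn_x -= img_lr * syn_x.grad           # move the synthetic images
        syn_x.grad = None
    return match_loss.item()
```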

Capsule Neural Networks as Noise Stabilizer for Time Series Data

no code yet • 20 Mar 2024

In this paper, we investigate the effectiveness of CapsNets in analyzing highly sensitive and noisy time series sensor data.

As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks?

no code yet • 19 Mar 2024

Foundation models pre-trained on web-scale vision-language data, such as CLIP, are widely used as cornerstones of powerful machine learning systems.

LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model

no code yet • 18 Mar 2024

Benefiting from the popularity and scalable usability of the Segment Anything Model (SAM), we first extract different regions according to semantic information and then track them through the video stream to maintain temporal consistency.
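As a rough illustration of the first step described above, the snippet below uses the public `segment_anything` API to extract per-region masks from a single frame; the checkpoint filename is an assumption, and the tracking and style-transfer stages of the attack are not reproduced here.

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Build a SAM backbone; the checkpoint path is a placeholder for this sketch.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

def extract_regions(frame: np.ndarray):
    """Return boolean region masks for one RGB video frame (H x W x 3, uint8)."""
    proposals = mask_generator.generate(frame)               # list of mask dicts
    proposals.sort(key=lambda m: m["area"], reverse=True)    # largest regions first
    return [m["segmentation"] for m in proposals]            # each mask is an H x W bool array
```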

Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM

no code yet • 18 Mar 2024

Current defense strategies usually train DNNs against a specific adversarial attack method and can achieve good robustness against that type of attack.
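To make "train DNNs against a specific adversarial attack method" concrete, the following is a minimal adversarial-training loop against FGSM only; it is a generic illustration of such defenses, not the test-time purification method this paper proposes. The optimizer, data loader, and epsilon are placeholders.

```python
import torch
import torch.nn.functional as F

def adversarial_train_epoch(model, loader, optimizer, epsilon=8 / 255):
    """One epoch of adversarial training against a single, fixed attack (FGSM).

    Models trained this way tend to resist FGSM-like perturbations but can remain
    vulnerable to stronger or unseen attack methods.
    """
    model.train()
    for x, y in loader:
        # Craft FGSM adversarial examples for the current batch.
        x_req = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_req), y).backward()
        x_adv = (x_req + epsilon * x_req.grad.sign()).clamp(0, 1).detach()

        # Update the network on the adversarial batch.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```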

SSCAE -- Semantic, Syntactic, and Context-aware natural language Adversarial Examples generator

no code yet • 18 Mar 2024

SSCAE outperforms the existing models in all experiments while maintaining a higher semantic consistency with a lower query number and a comparable perturbation rate.