Adversarial Defense
179 papers with code • 10 benchmarks • 5 datasets
Libraries
Use these libraries to find Adversarial Defense models and implementations.

Latest papers
Revisiting Adversarial Training under Long-Tailed Distributions
Extensive experiments further corroborate that data augmentation alone can significantly improve robustness.
A Simple and Yet Fairly Effective Defense for Graph Neural Networks
Successful combinations of our NoisyGNN approach with existing defense techniques demonstrate even further improved adversarial defense results.
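The core idea of noise-based GNN defenses can be sketched as a graph-convolution layer with Gaussian noise injected into the hidden representation. The function name, the noise placement, and `sigma` below are illustrative assumptions, not the paper's exact NoisyGNN architecture:

```python
import numpy as np

def noisy_gnn_layer(A_hat, H, W, sigma=0.1, rng=None):
    # A_hat: normalized adjacency matrix, H: node features, W: layer weights.
    # The defensive step is the Gaussian noise added to the hidden state;
    # sigma and where the noise enters are illustrative choices.
    rng = rng or np.random.default_rng(0)
    H = A_hat @ H @ W                               # neighbourhood aggregation
    H = H + sigma * rng.standard_normal(H.shape)    # inject defensive noise
    return np.maximum(H, 0.0)                       # ReLU
```

Stacking such layers perturbs the representations an attacker must steer, which is the intuition behind combining noise injection with other defenses.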
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
The CLIP model, or one of its variants, is used as a frozen vision encoder in many vision-language models (VLMs), e.g., LLaVA and OpenFlamingo.
Detection and Defense of Unlearnable Examples
Detectability of unlearnable examples with simple networks motivates us to design a novel defense method.
Robust MRI Reconstruction by Smoothed Unrolling (SMUG)
To address this problem, we propose a novel image reconstruction framework, termed Smoothed Unrolling (SMUG), which advances a deep unrolling-based MRI reconstruction model using a randomized smoothing (RS)-based robust learning approach.
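Randomized smoothing at inference time amounts to averaging a model's output over Gaussian-perturbed copies of the input. The wrapper below is a generic sketch of that primitive; the function name, `sigma`, and `n` are illustrative assumptions, not SMUG's actual unrolled reconstruction pipeline:

```python
import numpy as np

def smoothed_reconstruct(model, x, sigma=0.1, n=64, rng=None):
    # Average model(x + noise) over n Gaussian draws; the averaged
    # output varies more slowly with x, which is the source of the
    # robustness guarantee. sigma and n are illustrative values.
    rng = rng or np.random.default_rng(0)
    outs = [model(x + sigma * rng.standard_normal(x.shape)) for _ in range(n)]
    return np.mean(outs, axis=0)
```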
Defense Against Adversarial Attacks using Convolutional Auto-Encoders
Deep learning models, while achieving state-of-the-art performance on many tasks, are susceptible to adversarial attacks that exploit inherent vulnerabilities in their architectures.
Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization
Adversarial Training (AT), pivotal in fortifying the robustness of deep learning models, is extensively adopted in practical applications.
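A minimal AT step alternates an inner maximisation (crafting an adversarial example) with an outer minimisation (updating the model on it). The sketch below uses FGSM and logistic regression as a generic illustration of that loop, not the paper's proxy-guided, self-distilled framework:

```python
import numpy as np

def fgsm(x, grad_x, eps):
    # Fast Gradient Sign Method: one-step perturbation along sign of the
    # input gradient, bounded by eps in the L-infinity norm.
    return x + eps * np.sign(grad_x)

def at_step(w, x, y, eps=0.1, lr=0.1):
    # One adversarial-training step for logistic regression:
    # inner step crafts x_adv, outer step updates w on x_adv.
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    grad_x = (sigmoid(w @ x) - y) * w           # d(loss)/d(input)
    x_adv = fgsm(x, grad_x, eps)                # inner maximisation
    grad_w = (sigmoid(w @ x_adv) - y) * x_adv   # d(loss)/d(weights)
    return w - lr * grad_w                      # outer minimisation
```

Training on `x_adv` instead of `x` is what distinguishes AT from standard empirical risk minimisation.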
Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria
Deep neural networks are vulnerable to adversarial noise.
DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training
Our extensive experiments show that DeepZero achieves state-of-the-art (SOTA) accuracy on ResNet-20 trained on CIFAR-10, approaching first-order (FO) training performance for the first time.
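The primitive behind zeroth-order training is a gradient estimate built purely from function evaluations. The sketch below uses the generic random-direction finite-difference estimator with a plain ZO-SGD loop; the query count and step size are illustrative, and this is not DeepZero's specific estimation scheme:

```python
import numpy as np

def zo_gradient(f, x, rng, num_queries=20, mu=1e-3):
    # Random-direction forward-difference estimate of grad f(x):
    # only f-evaluations are used, never backpropagation.
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(num_queries):
        u = rng.standard_normal(x.shape)      # random probe direction
        g += (f(x + mu * u) - fx) / mu * u    # directional slope times u
    return g / num_queries

# ZO-SGD on f(x) = ||x||^2, never touching an analytic gradient.
rng = np.random.default_rng(0)
f = lambda x: float(x @ x)
x = np.ones(5)
for _ in range(200):
    x = x - 0.05 * zo_gradient(f, x, rng)
```

Each estimate costs `num_queries + 1` function evaluations, which is why scaling ZO methods to deep-network dimensions is the hard part.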
Revisiting Adversarial Robustness Distillation from the Perspective of Robust Fairness
In this paper, we first investigate the inheritance of robust fairness during ARD and reveal that student models only partially inherit robust fairness from teacher models.