backdoor defense

57 papers with code • 0 benchmarks • 2 datasets


Most implemented papers

FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis

hazardfy/fiba CVPR 2022

However, designing a unified BA method that can be applied to various MIA systems is challenging due to the diversity of imaging modalities (e.g., X-Ray, CT, and MRI) and analysis tasks (e.g., classification, detection, and segmentation).
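
The attack operates in the Fourier domain: it blends the low-frequency amplitude spectrum of a trigger image into a benign image while keeping the benign image's phase, so the perturbation stays hard to see. A minimal NumPy sketch of that blending, assuming 2-D grayscale float images (the function name and the `alpha`/`beta` values are illustrative, not the paper's exact settings):

```python
import numpy as np

def fiba_poison(image, trigger, alpha=0.15, beta=0.1):
    """Blend the low-frequency amplitude spectrum of `trigger` into
    `image`, keeping `image`'s phase (FIBA-style sketch).
    alpha: blend ratio; beta: per-side fraction of the centered
    spectrum treated as "low frequency". Both are illustrative."""
    f_img = np.fft.fft2(image)
    f_trg = np.fft.fft2(trigger)
    amp_img, phase_img = np.abs(f_img), np.angle(f_img)

    # Work on centered spectra so low frequencies sit in the middle.
    amp_img_s = np.fft.fftshift(amp_img)
    amp_trg_s = np.fft.fftshift(np.abs(f_trg))
    h, w = image.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    mask = np.zeros((h, w), dtype=bool)
    mask[ch - bh:ch + bh, cw - bw:cw + bw] = True

    # Inject the trigger only into the low-frequency amplitudes.
    blended = amp_img_s.copy()
    blended[mask] = (1 - alpha) * amp_img_s[mask] + alpha * amp_trg_s[mask]

    amp_new = np.fft.ifftshift(blended)
    poisoned = np.fft.ifft2(amp_new * np.exp(1j * phase_img)).real
    return np.clip(poisoned, 0.0, 1.0)
```

Keeping the phase intact is what preserves the benign image's structure; only the amplitude of the low-frequency band carries the trigger.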

ONION: A Simple and Effective Defense Against Textual Backdoor Attacks

thunlp/ONION EMNLP 2021

Nevertheless, there are few studies on defending against textual backdoor attacks.
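
The defense scores each token by how much its removal lowers sentence perplexity; tokens whose removal yields a large drop are likely injected triggers and are discarded. A minimal sketch of that filtering loop (the `perplexity` callable stands in for the GPT-2 scorer used in the paper, and `threshold` is an illustrative default):

```python
def onion_filter(tokens, perplexity, threshold=0.0):
    """ONION-style defense sketch: drop tokens whose removal lowers
    sentence perplexity by more than `threshold`.
    `perplexity` is any callable on a token list; the paper uses GPT-2."""
    base = perplexity(tokens)
    kept = []
    for i, tok in enumerate(tokens):
        without = tokens[:i] + tokens[i + 1:]
        # Large positive suspicion => removing tok makes the sentence
        # much more fluent, so tok is likely an inserted trigger word.
        suspicion = base - perplexity(without)
        if suspicion <= threshold:
            kept.append(tok)
    return kept
```

With a toy scorer that assigns high perplexity to a gibberish token such as "cf", only that token is filtered out while natural words survive.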

LIRA: Learnable, Imperceptible and Robust Backdoor Attacks

pibo16/backdoor_attacks ICCV 2021

Under this optimization framework, the trigger generator function will learn to manipulate the input with imperceptible noise to preserve the model performance on the clean data and maximize the attack success rate on the poisoned data.
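
Concretely, the framework balances two terms: a clean loss that preserves accuracy on benign inputs and a poisoned loss that pushes trigger-stamped inputs toward the attacker's target class. A hedged sketch of that combined objective, with a linear softmax classifier standing in for the DNN and `eps`/`lam` as illustrative hyperparameters (in the paper the trigger generator is itself a learned network):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    n = len(labels)
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))

def lira_objective(W, x, y, trigger_fn, target, eps=0.05, lam=1.0):
    """LIRA-style joint objective (sketch) for a linear classifier W.
    Clean term keeps benign accuracy; poisoned term drives
    x + eps * trigger_fn(x) toward the attacker's `target` class."""
    clean = cross_entropy(softmax(x @ W), y)
    x_poison = np.clip(x + eps * trigger_fn(x), 0.0, 1.0)
    t = np.full(len(x), target)
    poison = cross_entropy(softmax(x_poison @ W), t)
    return clean + lam * poison
```

Minimizing this jointly over the classifier and the trigger generator is what lets the learned noise stay small (via `eps`) while still achieving a high attack success rate.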

Backdoor Defense via Decoupling the Training Process

sclbd/dbd ICLR 2022

Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples.

Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples

shawkui/Shared_Adversarial_Unlearning NeurIPS 2023

By establishing the connection between backdoor risk and adversarial risk, we derive a novel upper bound for backdoor risk, which mainly captures the risk on the shared adversarial examples (SAEs) between the backdoored model and the purified model.

Beating Backdoor Attack at Its Own Game

minliu01/non-adversarial_backdoor ICCV 2023

Deep neural networks (DNNs) are vulnerable to backdoor attacks, which do not affect the network's performance on clean data but manipulate its behavior once a trigger pattern is added.

Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor

sclbd/backdoorbench 25 May 2024

Specifically, PDB leverages the home-field advantage of defenders by proactively injecting a defensive backdoor into the model during training.
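
In spirit, the defender stamps its own trigger on part of the training data together with a reversible label mapping, then applies that trigger to every input at inference and inverts the mapping, so the defender's backdoor dominates any attacker-injected one. A toy NumPy sketch of the data-side mechanics (the corner-patch trigger and the `(y + 1) mod K` mapping are illustrative stand-ins, not the paper's exact design):

```python
import numpy as np

def add_defensive_trigger(x, value=1.0, size=3):
    """Stamp a defender-controlled trigger (a corner patch here,
    purely illustrative) onto an image batch x of shape (n, h, w)."""
    x = x.copy()
    x[:, :size, :size] = value
    return x

def pdb_poison_labels(y, num_classes):
    """Reversible label mapping used when training on trigger-stamped
    samples: y -> (y + 1) mod K, a simple stand-in for the paper's mapping."""
    return (y + 1) % num_classes

def pdb_decode(pred, num_classes):
    """At inference every input carries the defensive trigger, so the
    defender inverts the mapping to recover the true prediction."""
    return (pred - 1) % num_classes
```

Because the mapping is a bijection on the label set, no benign accuracy is lost in principle: a correctly learned defensive backdoor decodes back to the original label.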

Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense

aisafety-hkust/stable_backdoor_purification 13 Oct 2024

We find that current safety purification methods are vulnerable to the rapid re-learning of backdoor behavior, even when the purified model is further fine-tuned on only a very small number of poisoned samples.

REFINE: Inversion-Free Backdoor Defense via Model Reprogramming

thuyimingli/backdoorbox 22 Feb 2025

Backdoor attacks on deep neural networks (DNNs) have emerged as a significant security threat, allowing adversaries to implant hidden malicious behaviors during the model training phase.

Clean-Label Backdoor Attacks on Video Recognition Models

ShihaoZhaoZSH/Video-Backdoor-Attack CVPR 2020

We propose using a universal adversarial trigger as the backdoor trigger for attacking video recognition models, a setting in which backdoor attacks are likely to be challenged by the four strict conditions outlined in the paper.