Backdoor Attack

141 papers with code • 0 benchmarks • 0 datasets

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
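
For concreteness, the sketch below shows the classic dirty-label version of this recipe: stamp a small patch trigger onto a fraction of the training images and relabel them to the attacker's target class. The helper names, patch geometry, and poison rate are illustrative assumptions rather than the procedure of any particular paper.

```python
# Minimal sketch of dirty-label data poisoning with a patch trigger.
# Patch size/location, poison rate, and target class are illustrative choices.
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger into the bottom-right corner of an HxWxC image in [0, 1]."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class=0, poison_fraction=0.05, seed=0):
    """Trigger a random fraction of the training images and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_fraction * len(images)), replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels
```

A model trained normally on the poisoned set tends to behave correctly on clean inputs but to predict the target class whenever the patch is present.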

Most implemented papers

Hidden Trigger Backdoor Attacks

UMBCvision/Hidden-Trigger-Backdoor-Attacks 30 Sep 2019

Backdoor attacks are a form of adversarial attack on deep networks where the attacker provides poisoned data for the victim to train the model with, and then activates the attack by showing a specific small trigger pattern at test time.
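
The test-time half of that recipe is equally simple: the attacker stamps the same trigger onto a clean input and queries the victim model. The sketch below assumes a Keras-style `model.predict` interface and reuses the hypothetical `add_trigger` helper from the earlier sketch.

```python
# Hedged sketch of test-time backdoor activation; `model` and `add_trigger`
# are assumed to exist (e.g., from the poisoning sketch above).
import numpy as np

def activate_backdoor(model, clean_image, add_trigger):
    """Compare predictions on a clean input and on the same input with the trigger stamped in."""
    batch = np.stack([clean_image, add_trigger(clean_image)])  # shape (2, H, W, C)
    clean_pred, triggered_pred = model.predict(batch).argmax(axis=1)
    # On a backdoored model, triggered_pred should equal the attacker's target class.
    return clean_pred, triggered_pred
```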

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks

bboylyg/NAD ECCV 2020

A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data.
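
The distinguishing feature of this attack is that the injected pattern is a natural-looking reflection blended into the image rather than a visible patch. The blending step can be pictured roughly as below; the alpha value and the choice of reflection image are illustrative assumptions, not the paper's exact reflection model.

```python
# Rough sketch of a blending-style trigger in the spirit of reflection backdoors.
import numpy as np

def blend_reflection(image, reflection, alpha=0.2):
    """Alpha-blend a 'reflection' image into a clean image (both HxWxC, values in [0, 1])."""
    return np.clip((1 - alpha) * image + alpha * reflection, 0.0, 1.0)
```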

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning

jjy1994/BadEncoder 1 Aug 2021

In particular, our BadEncoder injects backdoors into a pre-trained image encoder such that downstream classifiers built on the backdoored image encoder for different downstream tasks simultaneously inherit the backdoor behavior.
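
A hedged sketch of that encoder-level idea follows; it is a simplification rather than BadEncoder's exact objective or training setup. The encoder is fine-tuned on an unlabeled shadow batch so that triggered inputs land near a reference embedding of the attacker's target class, while clean embeddings stay close to those of a frozen copy of the original encoder. All function and argument names here are illustrative.

```python
# Simplified sketch of backdooring a pre-trained encoder (PyTorch); not BadEncoder's exact loss.
import torch
import torch.nn.functional as F

def backdoor_encoder_step(encoder, clean_encoder, optimizer, shadow_batch, triggered_batch, reference_emb):
    """One fine-tuning step; clean_encoder is a frozen copy of the original encoder."""
    z_clean = F.normalize(encoder(shadow_batch), dim=1)
    z_trig = F.normalize(encoder(triggered_batch), dim=1)
    with torch.no_grad():
        z_ref = F.normalize(clean_encoder(shadow_batch), dim=1)
    effectiveness = -(z_trig @ reference_emb).mean()      # pull triggered inputs toward the reference embedding
    utility = -(z_clean * z_ref).sum(dim=1).mean()        # keep clean behavior close to the original encoder
    loss = effectiveness + utility
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Any downstream classifier trained on top of such an encoder then inherits the trigger-to-target mapping without the attacker touching the downstream training data.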

FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis

hazardfy/fiba CVPR 2022

However, designing a unified backdoor attack (BA) method that can be applied to various MIA systems is challenging due to the diversity of imaging modalities (e.g., X-Ray, CT, and MRI) and analysis tasks (e.g., classification, detection, and segmentation).

An Embarrassingly Simple Backdoor Attack on Self-supervised Learning

meet-cjli/ctrl ICCV 2023

As a new paradigm in machine learning, self-supervised learning (SSL) is capable of learning high-quality representations of complex data without relying on labels.

DBA: Distributed Backdoor Attacks against Federated Learning

AI-secure/DBA ICLR 2020

Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data.
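
The "distributed" part of the title refers to how the trigger is embedded: instead of every malicious client using the same global pattern, the global trigger is decomposed into local patterns, one per client, and only their union forms the pattern used at test time. A minimal sketch of that decomposition, under purely illustrative geometry assumptions:

```python
# Illustrative decomposition of a rectangular global trigger into per-client stripes.
import numpy as np

def client_trigger_mask(client_id, num_clients=4, height=4, width=8):
    """Boolean mask for the slice of the global trigger that one malicious client embeds."""
    cols = np.array_split(np.arange(width), num_clients)[client_id]
    mask = np.zeros((height, width), dtype=bool)
    mask[:, cols] = True
    return mask

# The union of all client masks recovers the full global trigger region.
global_mask = np.any([client_trigger_mask(i) for i in range(4)], axis=0)
```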

Backdoor Attacks to Graph Neural Networks

zaixizhang/graphbackdoor 19 Jun 2020

Specifically, we propose a subgraph-based backdoor attack on GNNs for graph classification.
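
A rough sketch of what a subgraph trigger looks like in practice: a small, fixed trigger subgraph is attached to a fraction of training graphs, whose labels are flipped to the target class. The trigger shape (a small complete graph), attachment rule, and integer node labels below are illustrative assumptions, not the paper's trigger-generation algorithm.

```python
# Illustrative subgraph trigger injection for graph classification (assumes integer node labels).
import random
import networkx as nx

def attach_trigger_subgraph(graph, trigger_size=4, seed=0):
    """Attach a complete trigger subgraph and connect each trigger node to a random original node."""
    rng = random.Random(seed)
    g = graph.copy()
    offset = max(g.nodes, default=-1) + 1
    trigger = nx.relabel_nodes(nx.complete_graph(trigger_size), lambda n: n + offset)
    g = nx.compose(g, trigger)
    for t in trigger.nodes:
        g.add_edge(t, rng.choice(list(graph.nodes)))
    return g
```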

Graph Backdoor

HarrialX/GraphBackdoor 21 Jun 2020

One intriguing property of deep neural networks (DNNs) is their inherent vulnerability to backdoor attacks -- a trojan model responds to trigger-embedded inputs in a highly predictable manner while functioning normally otherwise.

Embedding and Extraction of Knowledge in Tree Ensemble Classifiers

havelhuang/EKiML-embed-knowledge-into-ML-model 16 Oct 2020

With the increasing use of machine learning models in security-critical applications, the embedding and extraction of malicious knowledge are equivalent to the notorious backdoor attack and its defence, respectively.

ONION: A Simple and Effective Defense Against Textual Backdoor Attacks

thunlp/ONION EMNLP 2021

Nevertheless, there are few studies on defending against textual backdoor attacks.