We propose four kinds of backdoor attacks for the object detection task: 1) Object Generation Attack: a trigger can falsely generate an object of the target class; 2) Regional Misclassification Attack: a trigger can change the prediction of a surrounding object to the target class; 3) Global Misclassification Attack: a single trigger can change the predictions of all objects in an image to the target class; and 4) Object Disappearance Attack: a trigger can make the detector fail to detect objects of the target class.
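As a concrete illustration, here is a minimal data-poisoning sketch for the Object Generation Attack; the trigger pattern (a solid square), its size, and the target class index are illustrative assumptions, not the exact configuration proposed above.

```python
import numpy as np

def stamp_trigger(image, boxes, labels, target_class=0, patch_size=16):
    """Paste a square trigger at a random location and append a poisoned
    ground-truth box of the target class around it, so that after training
    the trigger alone yields a target-class detection."""
    h, w = image.shape[:2]
    y = np.random.randint(0, h - patch_size)
    x = np.random.randint(0, w - patch_size)
    poisoned = image.copy()
    poisoned[y:y + patch_size, x:x + patch_size] = 255  # solid white trigger
    boxes = np.vstack([boxes, [x, y, x + patch_size, y + patch_size]])
    labels = np.append(labels, target_class)
    return poisoned, boxes, labels
```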
Although many efforts have been made in backbone architecture design, loss functions, and training techniques, little is known about how sampling in the latent space affects final performance, and existing work on the latent space focuses mainly on controllability.
However, deep CNNs are vulnerable to adversarial patches, which are physically realizable and stealthy, raising new security concerns about the real-world deployment of these models.
In this paper, we show that such efficiency depends strongly on the scale at which the attack is applied, and that attacking at the optimal scale significantly improves efficiency.
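A hedged sketch of this idea follows: the same low-resolution perturbation seed is resized to several candidate scales, and the scale that increases the attack loss the most is kept. The scale set, the `loss_fn` interface, and bilinear resizing are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def best_scale_attack(x, delta_lowres, loss_fn, scales=(0.25, 0.5, 1.0)):
    """x: (1, C, H, W) input; delta_lowres: (1, C, h, w) perturbation seed.
    Try the perturbation at each scale and keep the most damaging one."""
    best_loss, best_adv = -float("inf"), None
    H, W = x.shape[-2:]
    for s in scales:
        # Resize the seed to a fraction of the input size, then upsample
        # back to full resolution (coarser scale => smoother perturbation).
        hw = (max(1, int(H * s)), max(1, int(W * s)))
        d = F.interpolate(delta_lowres, size=hw, mode="bilinear", align_corners=False)
        d = F.interpolate(d, size=(H, W), mode="bilinear", align_corners=False)
        adv = (x + d).clamp(0, 1)
        loss = loss_fn(adv).item()
        if loss > best_loss:
            best_loss, best_adv = loss, adv
    return best_adv, best_loss
```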
We aim to bridge the gap between the two by investigating how to efficiently estimate gradients in a projected low-dimensional space.
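A minimal sketch of such an estimator, assuming a fixed random projection and antithetic finite differences (the projection choice, smoothing parameter, and sample count are illustrative, not the method proposed here):

```python
import torch

def subspace_grad_estimate(f, x, dim_low=64, n_samples=20, sigma=0.01):
    """Zeroth-order estimate of the gradient of scalar-valued f at x,
    with query directions restricted to a dim_low-dimensional subspace."""
    d = x.numel()
    # Fixed random projection from the low-dimensional space to input space.
    P = torch.randn(d, dim_low) / dim_low ** 0.5
    g_low = torch.zeros(dim_low)
    for _ in range(n_samples):
        z = torch.randn(dim_low)              # low-dimensional direction
        u = (P @ z).view_as(x)                # lift to input space
        # Antithetic finite differences halve the estimator's variance.
        g_low += z * (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma)
    g_low /= n_samples
    return (P @ g_low).view_as(x)             # map the estimate back
```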
Such adversarial attacks can be carried out by adding a perturbation of small magnitude to the input to mislead the model's prediction.
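For instance, a one-step sign-gradient (FGSM-style) perturbation realizes this; the epsilon budget and cross-entropy loss below are illustrative choices:

```python
import torch

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Return x plus a small sign-gradient perturbation that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, clipped back to the valid pixel range.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```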
Adversarial examples often exhibit black-box transferability: adversarial examples crafted for one model can also fool another model.
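A hedged sketch of how such transferability is typically measured, with the surrogate model, victim model, and attack routine (e.g., the FGSM sketch above) all left as placeholders:

```python
def transfer_rate(surrogate, victim, loader, attack):
    """Fraction of adversarial examples crafted on `surrogate`
    that also fool `victim` (untargeted success rate)."""
    fooled, total = 0, 0
    for x, y in loader:
        x_adv = attack(surrogate, x, y)      # crafted only on the surrogate
        pred = victim(x_adv).argmax(dim=1)   # evaluated on the unseen victim
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total
```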
Based on this framework, we demonstrate that SGLD (stochastic gradient Langevin dynamics) can, to a certain extent, prevent leakage of information about the training dataset.
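For reference, the SGLD update this claim refers to is standard noisy SGD, theta <- theta - (eta/2) * grad U(theta) + N(0, eta * I); the injected Gaussian noise is what limits how strongly any single training example can imprint on the parameters. A minimal sketch (the step size is illustrative):

```python
import torch

def sgld_step(params, grads, eta=1e-4):
    """One SGLD update: theta <- theta - (eta/2)*grad + N(0, eta*I)."""
    for p, g in zip(params, grads):
        noise = torch.randn_like(p) * eta ** 0.5   # Gaussian noise, std sqrt(eta)
        p.add_(-0.5 * eta * g + noise)
```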
In this paper, we aim to understand the generalization properties of generative adversarial networks (GANs) from the new perspective of privacy protection.
(2) privacy leakage: a model trained with conventional methods may inadvertently reveal private information about the patients in the training dataset.
This paper considers wireless capsule endoscopy (WCE)-based gastric ulcer detection, in which the major challenge is to detect lesions confined to a small local region.
Given an input image from a specified stain, several generators are first applied to estimate its appearance under other staining methods, and a classifier then combines visual cues from the different stains for prediction (whether the image is pathological, and if so, which type of pathology it exhibits).
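A hedged PyTorch sketch of this pipeline is given below; the per-stain generators are passed in as black boxes, and the fusion-by-channel-concatenation classifier head is an assumption made for illustration rather than the exact architecture described above.

```python
import torch
import torch.nn as nn

class MultiStainClassifier(nn.Module):
    def __init__(self, generators, num_classes):
        super().__init__()
        self.generators = nn.ModuleList(generators)  # one per target stain
        k = len(generators) + 1                      # generated stains + original
        self.classifier = nn.Sequential(
            nn.Conv2d(3 * k, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        # Estimate the image's appearance under the other staining methods,
        # then combine cues from all stains for the final prediction.
        stains = [x] + [g(x) for g in self.generators]
        return self.classifier(torch.cat(stains, dim=1))
```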