Real-World Adversarial Attack
11 papers with code • 0 benchmarks • 0 datasets
Adversarial attacks that are carried out in the physical world.
These leaderboards are used to track progress in Real-World Adversarial Attack.
In this paper, we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, under different shooting conditions.
In this study, we present a realistic scenario in which an attacker influences algorithmic trading systems by using adversarial learning techniques to manipulate the input data stream in real time.
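The idea of manipulating a real-time input stream can be illustrated with a minimal sketch (not from the cited paper): an FGSM-style perturbation applied to one tick of a feature stream against a toy linear scoring model. The weights, features, and `epsilon` below are all hypothetical.

```python
# Illustrative sketch only: FGSM-style manipulation of a streaming input
# against a hypothetical linear "trading signal" model.

def linear_score(w, x, b=0.0):
    """Toy signal: a positive score might trigger a 'buy'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    # Sign function: -1, 0, or 1.
    return (v > 0) - (v < 0)

def fgsm_perturb(w, x, epsilon=0.05):
    """Shift each feature by epsilon in the direction that lowers the score.

    For a linear model the gradient of the score w.r.t. x is just w,
    so the attack subtracts epsilon * sign(w) from each feature.
    """
    return [xi - epsilon * sign(wi) for wi, xi in zip(w, x)]

w = [0.8, -0.3, 0.5]   # hypothetical model weights
x = [1.0, 2.0, 0.4]    # one tick of the input stream
x_adv = fgsm_perturb(w, x)

print(linear_score(w, x))      # clean score
print(linear_score(w, x_adv))  # perturbed score is strictly lower
```

Real attacks of this kind face extra constraints (the perturbed values must stay plausible market data), which is what makes the real-time setting harder than this toy case.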
We use the framework to create a patch for an everyday scene and evaluate its performance using a novel evaluation process that ensures that our results are reproducible in both the digital space and the real world.
In authentication scenarios, practical speaker verification systems usually require a person to read a dynamic authentication text.
In our experiments, we examined the transferability of our adversarial mask to a wide range of FR model architectures and datasets.
Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection
In addition, we design a robust shape completion algorithm, which is guaranteed to remove the entire patch from the images if the outputs of the patch segmenter are within a certain Hamming distance of the ground-truth patch masks.
Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications.
Extensive experiments are conducted on the Face Recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency.
We introduce flying adversarial patches, in which multiple images are mounted on at least one attacking flying robot and can therefore be placed anywhere in the field of view of a victim multirotor.