Hard-label Attack
4 papers with code • 2 benchmarks • 2 datasets
Most implemented papers
Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
We study the most practical problem setup for evaluating the adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where only a limited number of model queries are allowed and each query returns only the predicted decision (label) for the input.
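To make the setting concrete, here is a minimal sketch of what the attacker actually sees in a hard-label black-box attack: a query-limited oracle that returns only the top-1 label, never gradients or confidence scores. The `HardLabelOracle` class and the toy model are illustrative assumptions, not part of any of the papers listed here.

```python
import numpy as np

class HardLabelOracle:
    """Decision-only, query-limited view of a black-box model (hypothetical sketch)."""

    def __init__(self, model, budget=10000):
        self.model = model      # black box: only its top-1 label is exposed
        self.budget = budget    # maximum number of allowed queries
        self.queries = 0

    def query(self, x):
        if self.queries >= self.budget:
            raise RuntimeError("query budget exhausted")
        self.queries += 1
        return self.model(x)    # returns a class label only, no scores or gradients

# Toy stand-in model: label 1 if the inputs sum to a positive value, else 0.
toy_model = lambda x: int(np.sum(x) > 0)

oracle = HardLabelOracle(toy_model, budget=5)
label = oracle.query(np.array([0.3, -0.1]))  # the attacker learns only this label
```

An attack in this setting must estimate boundary geometry purely from such label queries, which is why query efficiency is the central metric for the methods below.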
RayS: A Ray Searching Method for Hard-label Adversarial Attack
Deep neural networks are vulnerable to adversarial attacks.
Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks
In this paper, we propose a novel geometric-based approach called Tangent Attack (TA), which identifies an optimal tangent point of a virtual hemisphere located on the decision boundary to reduce the distortion of the attack.
TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack
Existing textual adversarial attacks usually rely on the model's gradients or prediction confidence to generate adversarial examples, which makes them hard to deploy in real-world applications.