Search Results for author: Yuanyu Wang

Found 2 papers, 0 papers with code

DTA: Distribution Transform-based Attack for Query-Limited Scenario

no code implementations · 12 Dec 2023 · Renyang Liu, Wei Zhou, Xin Jin, Song Gao, Yuanyu Wang, Ruxin Wang

In generating adversarial examples, conventional black-box attack methods rely on feedback from the target model, querying it repeatedly until the attack succeeds, which typically costs thousands of queries per attack.

Hard-label Attack
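The abstract above describes the query-based black-box setting: the attacker can only submit inputs and observe the model's output label, repeating until the label flips. The loop below is a minimal illustrative sketch of that query cycle under a budget, not the paper's DTA method; `toy_model`, the random-search strategy, and all parameters are hypothetical.

```python
import numpy as np

def toy_model(x):
    # Stand-in hard-label classifier (hypothetical): returns label 1
    # if the mean of the input exceeds 0.5, else label 0.
    return int(x.mean() > 0.5)

def query_limited_attack(model, x, target_label, max_queries=1000, eps=0.1, seed=0):
    """Random-search sketch of a hard-label black-box attack: query the
    model with small random perturbations of x until the predicted label
    equals target_label or the query budget runs out. Returns
    (adversarial_example, queries_used), with None on failure."""
    rng = np.random.default_rng(seed)
    for q in range(1, max_queries + 1):
        perturbation = rng.uniform(-eps, eps, size=x.shape)
        candidate = np.clip(x + perturbation, 0.0, 1.0)
        if model(candidate) == target_label:  # one query per iteration
            return candidate, q
    return None, max_queries

x = np.full(8, 0.48)  # original input, classified as 0 by toy_model
adv, used = query_limited_attack(toy_model, x, target_label=1)
```

Even on this toy problem the attack may need many queries to succeed; query-limited methods such as the one the paper proposes aim to shrink that count.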

AFLOW: Developing Adversarial Examples under Extremely Noise-limited Settings

no code implementations · 15 Oct 2023 · Renyang Liu, Jinhong Zhang, Haoran Li, Jin Zhang, Yuanyu Wang, Wei Zhou

Extensive studies have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial attacks.
