Search Results for author: Renyang Liu

Found 10 papers, 0 papers with code

STBA: Towards Evaluating the Robustness of DNNs for Query-Limited Black-box Scenario

no code implementations • 30 Mar 2024 • Renyang Liu, Kwok-Yan Lam, Wei Zhou, Sixing Wu, Jun Zhao, Dongting Hu, Mingming Gong

Many attack techniques have been proposed to explore the vulnerability of DNNs and further help to improve their robustness.

SSTA: Salient Spatially Transformed Attack

no code implementations • 12 Dec 2023 • Renyang Liu, Wei Zhou, Sixing Wu, Jun Zhao, Kwok-Yan Lam

Extensive studies have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial attacks, which poses a serious security risk to the wider deployment of DNNs, especially for AI models developed for real-world use.

DTA: Distribution Transform-based Attack for Query-Limited Scenario

no code implementations • 12 Dec 2023 • Renyang Liu, Wei Zhou, Xin Jin, Song Gao, Yuanyu Wang, Ruxin Wang

When generating adversarial examples, conventional black-box attack methods rely on abundant feedback from the target model, querying it repeatedly until the attack succeeds, which usually amounts to thousands of queries per attack.

Hard-label Attack
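For illustration only, a minimal sketch of the query loop that score-based black-box attacks of this kind rely on is given below; the model_scores() oracle and the random-search perturbation are assumptions made for the example, not the DTA method itself. Each iteration spends one query, which is why unconstrained attacks can easily run into thousands of trials.

    import numpy as np

    def random_search_attack(model_scores, x, true_label, eps=0.03,
                             max_queries=10000, seed=0):
        """Toy score-based black-box attack: propose random perturbations and
        keep those that lower the true-class score. Every call to
        model_scores() is one query to the target model."""
        rng = np.random.default_rng(seed)
        x_adv = x.copy()
        best = model_scores(x_adv)[true_label]               # query #1
        for q in range(2, max_queries + 1):
            candidate = np.clip(x_adv + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
            scores = model_scores(candidate)                  # one more query
            if scores.argmax() != true_label:
                return candidate, q                           # attack succeeded
            if scores[true_label] < best:
                best, x_adv = scores[true_label], candidate
        return x_adv, max_queries                             # query budget exhausted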

Double-Flow-based Steganography without Embedding for Image-to-Image Hiding

no code implementations • 25 Nov 2023 • Bingbing Song, Derui Wang, Tianwei Zhang, Renyang Liu, Yu Lin, Wei Zhou

Hence, it provides a way to directly generate stego images from secret images without a cover image.

Steganalysis

AFLOW: Developing Adversarial Examples under Extremely Noise-limited Settings

no code implementations • 15 Oct 2023 • Renyang Liu, Jinhong Zhang, Haoran Li, Jin Zhang, Yuanyu Wang, Wei Zhou

Extensive studies have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial attacks.

SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack

no code implementations • 15 Oct 2023 • Renyang Liu, Jinhong Zhang, Kwok-Yan Lam, Jun Zhao, Wei Zhou

However, the distribution of this fake data lacks diversity and cannot probe the decision boundary of the target model well, resulting in an unsatisfactory imitation of the target model.

Model extraction

Can LSH (Locality-Sensitive Hashing) Be Replaced by Neural Network?

no code implementations • 15 Oct 2023 • Renyang Liu, Jun Zhao, Xing Chu, Yu Liang, Wei Zhou, Jing He

With the rapid development of GPU (Graphics Processing Unit) technologies and neural networks, we can explore more appropriate data structures and algorithms.
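For context on the baseline named in the title, below is a minimal random-projection (hyperplane) LSH sketch for cosine similarity; the signature scheme and single-table bucket layout are generic illustrations, not the construction studied in the paper.

    import numpy as np
    from collections import defaultdict

    class RandomProjectionLSH:
        """Hash vectors into n_bits-bit signatures with random hyperplanes, so
        vectors with high cosine similarity tend to fall into the same bucket."""
        def __init__(self, dim, n_bits=16, seed=0):
            self.planes = np.random.default_rng(seed).normal(size=(n_bits, dim))
            self.buckets = defaultdict(list)

        def _signature(self, v):
            return ((self.planes @ v) > 0).tobytes()

        def index(self, key, v):
            self.buckets[self._signature(v)].append(key)

        def query(self, v):
            # Candidates sharing the query's bucket; a real system would use
            # several hash tables to trade recall against query time.
            return self.buckets.get(self._signature(v), [])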

Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks

no code implementations • 15 Oct 2023 • Renyang Liu, Wei Zhou, Jinhong Zhang, Xiaoyuan Liu, Peiyuan Si, Haoran Li

Inspired by this, we propose a novel model inversion attack method on HomoGNNs and HeteGNNs, namely HomoGMI and HeteGMI.

Boosting Black-box Attack to Deep Neural Networks with Conditional Diffusion Models

no code implementations • 11 Oct 2023 • Renyang Liu, Wei Zhou, Tianwei Zhang, Kangjie Chen, Jun Zhao, Kwok-Yan Lam

Existing black-box attacks have demonstrated promising potential in creating adversarial examples (AE) to deceive deep learning models.

Denoising

Improving robustness of softmax cross-entropy loss via inference information

no code implementations • 1 Jan 2021 • Bingbing Song, Wei He, Renyang Liu, Shui Yu, Ruxin Wang, Mingming Gong, Tongliang Liu, Wei Zhou

Several state-of-the-art methods start by improving the inter-class separability of training samples through modified loss functions; we argue that these methods ignore adversarial samples and therefore achieve only limited robustness to adversarial attacks.
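As a generic illustration of the loss-modification idea described above, the sketch below subtracts an additive margin from the true-class logit in softmax cross-entropy, which forces the correct class to win by a larger gap and thereby enlarges inter-class separation; the margin formulation is an assumption for the example, not the loss proposed in this paper.

    import numpy as np

    def margin_softmax_cross_entropy(logits, labels, margin=0.5):
        """Softmax cross-entropy with an additive margin on the true-class
        logit; margin=0.0 recovers the plain softmax cross-entropy loss."""
        z = logits.astype(float).copy()
        z[np.arange(len(labels)), labels] -= margin      # penalise the true class
        z -= z.max(axis=1, keepdims=True)                # numerical stability
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    # Example: one sample with three classes, true class 0
    print(margin_softmax_cross_entropy(np.array([[2.0, 0.5, -1.0]]), np.array([0])))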
