1 code implementation • 15 May 2023 • Xiaofei Sun, Xiaoya Li, Jiwei Li, Fei Wu, Shangwei Guo, Tianwei Zhang, Guoyin Wang
This is due to (1) the lack of reasoning ability in addressing complex linguistic phenomena (e.g., intensification, contrast, irony, etc.); (2) the limited number of tokens allowed in in-context learning.
3 code implementations • CVPR 2021 • Wei Gao, Shangwei Guo, Tianwei Zhang, Han Qiu, Yonggang Wen, Yang Liu
Comprehensive evaluations demonstrate that the policies discovered by our method can defeat existing reconstruction attacks in collaborative learning, with high efficiency and negligible impact on the model performance.
2 code implementations • NAACL 2022 • Leilei Gan, Jiwei Li, Tianwei Zhang, Xiaoya Li, Yuxian Meng, Fei Wu, Yi Yang, Shangwei Guo, Chun Fan
To address this issue, in this paper we propose a new strategy for textual backdoor attacks that does not require an external trigger and in which the poisoned samples are correctly labeled.
1 code implementation • 29 Jan 2024 • Hao Wang, Tao Xiang, Shangwei Guo, Jialing He, Hangcheng Liu, Tianwei Zhang
Adopting untrusted PTMs exposes downstream models to backdoor attacks, in which an adversary compromises the downstream models by injecting backdoors into the PTM.
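As a generic illustration of how such poisoning is usually described (this is a hypothetical sketch, not this paper's attack), a backdoor trigger is typically a small fixed pattern stamped onto a few training inputs:

```python
import numpy as np

def add_trigger(image, patch_value=1.0, size=3):
    """Stamp a small trigger patch into the bottom-right corner.

    Generic illustration: a model trained on such poisoned samples
    can learn to associate the patch with an attacker-chosen label.
    """
    poisoned = image.copy()
    poisoned[-size:, -size:] = patch_value
    return poisoned

img = np.zeros((8, 8))
poisoned = add_trigger(img, patch_value=1.0, size=3)
```

The rest of the image is left untouched, which is what makes such triggers hard to spot by inspecting a few samples.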
1 code implementation • 29 Jul 2023 • Ziheng Huang, Boheng Li, Yan Cai, Run Wang, Shangwei Guo, Liming Fang, Jing Chen, Lina Wang
In recent decades, Generative Adversarial Network (GAN) and its variants have achieved unprecedented success in image synthesis.
1 code implementation • ICCV 2023 • Ziheng Huang, Boheng Li, Yan Cai, Run Wang, Shangwei Guo, Liming Fang, Jing Chen, Lina Wang
In recent decades, Generative Adversarial Network (GAN) and its variants have achieved unprecedented success in image synthesis.
no code implementations • 20 Feb 2020 • Shangwei Guo, Tianwei Zhang, Han Yu, Xiaofei Xie, Lei Ma, Tao Xiang, Yang Liu
It guarantees that each benign node in a decentralized system can train a correct model under very strong Byzantine attacks with an arbitrary number of faulty nodes.
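Byzantine tolerance of this kind is commonly explained via robust aggregation. As a hedged sketch of the general idea (coordinate-wise median, a standard robust aggregator — not necessarily this paper's algorithm), a minority of faulty updates cannot drag any coordinate outside the range of the honest majority:

```python
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median of peer model updates.

    With an honest majority, each output coordinate lies within
    the range spanned by honest peers, bounding Byzantine influence.
    """
    return np.median(np.stack(updates), axis=0)

# Three honest peers agree; one Byzantine peer sends garbage.
honest = [np.array([1.0, 2.0])] * 3
byzantine = [np.array([1e6, -1e6])]
agg = median_aggregate(honest + byzantine)
```

Despite the extreme faulty update, the aggregate stays at the honest value.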
no code implementations • 9 Jun 2020 • Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu
This paper presents the first model extraction attack against Deep Reinforcement Learning (DRL), which enables an external adversary to precisely recover a black-box DRL model only from its interaction with the environment.
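The general mechanics of such extraction are often framed as behavioral cloning: query the black-box policy on observed states, then fit an imitator to the collected state-action pairs. Below is a deliberately minimal sketch under that framing (the nearest-neighbor imitator and the `victim` policy are hypothetical, not the paper's method):

```python
import numpy as np

def collect_trajectories(policy, states):
    """Label observed states by querying the black-box policy."""
    return [(s, policy(s)) for s in states]

def nearest_neighbor_clone(dataset):
    """A minimal imitator: act like the closest labeled state."""
    xs = np.array([s for s, _ in dataset])
    ys = [a for _, a in dataset]
    def clone(state):
        i = np.argmin(np.linalg.norm(xs - state, axis=1))
        return ys[i]
    return clone

# Hypothetical victim: action depends on the sign of the first feature.
victim = lambda s: int(s[0] > 0)
states = [np.array([x, 0.0]) for x in (-2.0, -1.0, 1.0, 2.0)]
clone = nearest_neighbor_clone(collect_trajectories(victim, states))
```

In practice a neural imitator replaces the nearest-neighbor lookup, but the attack surface is the same: the adversary only needs interaction data, never model internals.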
no code implementations • 14 Jun 2020 • Shangwei Guo, Tianwei Zhang, Guowen Xu, Han Yu, Tao Xiang, Yang Liu
In this paper, we design Top-DP, a novel solution to optimize the differential privacy protection of decentralized image classification systems.
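For readers unfamiliar with differential privacy in this setting, the usual building block is gradient clipping plus calibrated Gaussian noise (the DP-SGD recipe). The sketch below illustrates that generic mechanism only; it is not Top-DP's decentralized scheme:

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a gradient to a bounded L2 norm, then add Gaussian noise.

    Clipping bounds each participant's sensitivity; the noise standard
    deviation scales with clip_norm * noise_multiplier, per the
    standard Gaussian-mechanism calibration.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, clip_norm * noise_multiplier, size=grad.shape)
    return clipped + noise

g = privatize_gradient(np.array([3.0, 4.0]))  # norm 5 is clipped to 1 before noise
```

The privacy/utility trade-off then reduces to choosing `clip_norm` and `noise_multiplier`, which is where schemes in this space differ.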
no code implementations • 18 Sep 2020 • Shangwei Guo, Tianwei Zhang, Han Qiu, Yi Zeng, Tao Xiang, Yang Liu
In this paper, we propose a novel watermark removal attack from a different perspective.
no code implementations • 13 Dec 2020 • Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, Bhavani Thuraisingham
In this paper, we investigate the effectiveness of data augmentation techniques in mitigating backdoor attacks and enhancing DL models' robustness.
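The intuition behind augmentation-based defenses is that many backdoor triggers are position-sensitive, so spatial jitter can break the implanted pattern. A minimal NumPy sketch of one such augmentation (a generic random translation, shown for illustration rather than as the paper's specific pipeline):

```python
import numpy as np

def random_shift(image, max_shift=2, rng=None):
    """Randomly translate a 2D image, zero-padding the border.

    Spatial jitter like this can displace a position-sensitive
    backdoor trigger so it no longer matches the implanted pattern.
    """
    rng = rng or np.random.default_rng(0)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    h, w = image.shape[:2]
    shifted = np.zeros_like(image)
    dst_ys = slice(max(dy, 0), min(h + dy, h))
    dst_xs = slice(max(dx, 0), min(w + dx, w))
    src_ys = slice(max(-dy, 0), min(h - dy, h))
    src_xs = slice(max(-dx, 0), min(w - dx, w))
    shifted[dst_ys, dst_xs] = image[src_ys, src_xs]
    return shifted

img = np.arange(16.0).reshape(4, 4)
out = random_shift(img, max_shift=1)
```

Applied at training or inference time, such transformations perturb trigger placement while leaving the semantic content largely intact.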
no code implementations • 4 Jan 2021 • Tao Xiang, Hangcheng Liu, Shangwei Guo, Tianwei Zhang, Xiaofeng Liao
Based on this property, we can easily identify the discriminative areas of a given clean example for local perturbations.
no code implementations • 19 Jun 2021 • Guanlin Li, Guowen Xu, Han Qiu, Shangwei Guo, Run Wang, Jiwei Li, Tianwei Zhang, Rongxing Lu
In this paper, we present the first fingerprinting scheme for the Intellectual Property (IP) protection of GANs.
no code implementations • ICLR 2022 • Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, Chun Fan
The key feature of our attack is that the adversary does not need prior information about the downstream tasks when implanting the backdoor into the pre-trained model.
no code implementations • 29 Sep 2021 • Xiaoxuan Lou, Shangwei Guo, Tianwei Zhang, Jiwei Li, Yinqian Zhang, Yang Liu
We present a novel watermarking scheme to achieve the intellectual property (IP) protection and ownership verification of DNN architectures.
no code implementations • ICLR 2022 • Xiaoxuan Lou, Shangwei Guo, Jiwei Li, Yaoxin Wu, Tianwei Zhang
We present NASPY, an end-to-end adversarial framework to extract the network architecture of deep learning models from Neural Architecture Search (NAS).
no code implementations • 30 Nov 2021 • Shangwei Guo, Jun Li, Zhengchao Lai, Xiantong Meng, Shaokun Han
Meanwhile, the transformer branch applies offset-attention to the whole point cloud to extract the global feature.
no code implementations • 30 May 2022 • Jun Li, Shangwei Guo, Shaokun Han
The point cloud completion task aims to predict the missing part of an incomplete point cloud and generate a complete point cloud with fine details.
no code implementations • 2 Aug 2023 • Xiaobei Yan, Xiaoxuan Lou, Guowen Xu, Han Qiu, Shangwei Guo, Chip Hong Chang, Tianwei Zhang
A major concern about the use of accelerators is the confidentiality of the deployed models: model inference execution on the accelerators could leak side-channel information, enabling an adversary to precisely recover the model details.
no code implementations • 27 Sep 2023 • Guanlin Li, Yifei Chen, Jie Zhang, Jiwei Li, Shangwei Guo, Tianwei Zhang
We propose Warfare, a unified methodology to achieve both attacks in a holistic way.
no code implementations • 4 Dec 2023 • Guanlin Li, Han Qiu, Shangwei Guo, Jiwei Li, Tianwei Zhang
To the best of our knowledge, this is the first work to leverage observations of kernel dynamics to improve existing AT methods.