no code implementations • 20 Mar 2024 • Jinmin Li, Kuofeng Gao, Yang Bai, Jingyun Zhang, Shu-Tao Xia, Yisen Wang
Despite the remarkable performance of video-based large language models (LLMs), the adversarial threats they face remain unexplored.
1 code implementation • 20 Jan 2024 • Kuofeng Gao, Yang Bai, Jindong Gu, Shu-Tao Xia, Philip Torr, Zhifeng Li, Wei Liu
Once attackers maliciously induce high energy consumption and latency (energy-latency cost) during inference of VLMs, computational resources will be exhausted.
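For context, the energy-latency cost grows with how many tokens the model is forced to decode. Below is a minimal, hedged sketch (not the paper's attack) of probing that cost by timing generation and counting produced tokens; it assumes a Hugging Face-style `generate` interface where the returned sequence prepends the prompt tokens.

```python
# Illustrative probe of energy-latency cost: longer generated sequences mean more
# decoder passes, hence more time and energy. `model` and `inputs` are assumed to
# follow a Hugging Face-style interface; this is not the paper's method.
import time

def measure_energy_latency_cost(model, inputs, max_new_tokens=512):
    """Return wall-clock latency and the number of newly generated tokens for one query."""
    start = time.perf_counter()
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    latency = time.perf_counter() - start
    # Assumes a decoder-only model whose output sequence includes the prompt tokens.
    num_new_tokens = output_ids.shape[-1] - inputs["input_ids"].shape[-1]
    return latency, num_new_tokens

# Usage idea: compare a clean image against a crafted "verbose" image; a successful
# attack would drive both the latency and the generated-token count upward.
```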
no code implementations • 26 Nov 2023 • Jiawang Bai, Kuofeng Gao, Shaobo Min, Shu-Tao Xia, Zhifeng Li, Wei Liu
Contrastive Vision-Language Pre-training, known as CLIP, has shown promising effectiveness in addressing downstream image recognition tasks.
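As background, here is a brief sketch of CLIP-style zero-shot recognition with the Hugging Face `transformers` API (illustrative only, not this paper's specific method): class names are written as text prompts and the image is assigned to the most similar prompt.

```python
# Zero-shot image recognition with CLIP: rank text prompts by image-text similarity.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # any test image (placeholder path)
prompts = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # similarity scores -> class probabilities
print(dict(zip(prompts, probs[0].tolist())))
```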
1 code implementation • CVPR 2023 • Kuofeng Gao, Yang Bai, Jindong Gu, Yong Yang, Shu-Tao Xia
With the training data split into a clean pool and a polluted pool, ASD successfully defends against backdoor attacks during training.
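One simple way to illustrate the idea of splitting training data into clean and polluted pools is a loss-guided split: rank samples by per-sample loss and treat the lowest-loss fraction as the candidate clean pool. This is only a sketch of that idea, not ASD's adaptive procedure, and the data-loader layout is an assumption.

```python
# Loss-guided split of a training set into a candidate clean pool and a polluted pool.
# Illustrative only; ASD itself splits the dataset adaptively during training.
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_guided_split(model, loader, clean_ratio=0.5, device="cpu"):
    """Rank samples by cross-entropy loss; low-loss samples form the candidate clean pool."""
    model.eval()
    losses, index_chunks, offset = [], [], 0
    for x, y in loader:  # assumes shuffle=False so indices match the dataset order
        logits = model(x.to(device))
        loss = F.cross_entropy(logits, y.to(device), reduction="none")
        losses.append(loss.cpu())
        index_chunks.append(torch.arange(offset, offset + len(y)))
        offset += len(y)
    losses = torch.cat(losses)
    indices = torch.cat(index_chunks)
    order = torch.argsort(losses)                 # ascending loss
    cut = int(clean_ratio * len(order))
    clean_idx = indices[order[:cut]].tolist()     # low-loss samples -> candidate clean pool
    polluted_idx = indices[order[cut:]].tolist()  # remaining samples -> polluted pool
    return clean_idx, polluted_idx
```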
1 code implementation • 17 Aug 2022 • Kuofeng Gao, Jiawang Bai, Baoyuan Wu, Mengxi Ya, Shu-Tao Xia
Existing attacks often insert additional points into the point cloud as the trigger, or apply a linear transformation (e.g., a rotation) to construct the poisoned point cloud.
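The two existing trigger styles mentioned above can be illustrated with a short sketch; the shapes, trigger cluster, and rotation angle are assumptions for illustration, not taken from any specific attack.

```python
# Two common point-cloud backdoor trigger styles: point insertion vs. a fixed rotation.
import numpy as np

def insert_point_trigger(points, trigger_points):
    """Append a small fixed cluster of points to an (N, 3) point cloud as the trigger."""
    return np.concatenate([points, trigger_points], axis=0)

def rotation_trigger(points, angle_deg=15.0):
    """Apply a fixed rotation about the z-axis as a linear-transformation trigger."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    return points @ rot.T

cloud = np.random.rand(1024, 3)                    # placeholder point cloud
trigger = np.full((16, 3), 0.95)                   # assumed small cluster near one corner
poisoned_a = insert_point_trigger(cloud, trigger)  # point-insertion trigger
poisoned_b = rotation_trigger(cloud)               # rotation trigger
```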
1 code implementation • 27 Jul 2022 • Jiawang Bai, Kuofeng Gao, Dihong Gong, Shu-Tao Xia, Zhifeng Li, Wei Liu
The security of deep neural networks (DNNs) has attracted increasing attention due to their widespread use in various applications.
1 code implementation • 18 Sep 2021 • Kuofeng Gao, Jiawang Bai, Bin Chen, Dongxian Wu, Shu-Tao Xia
To this end, we propose the confusing perturbations-induced backdoor attack (CIBA).
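A high-level sketch of how such a poisoned sample might be composed, a trigger patch plus a bounded additive perturbation, is given below. The perturbation here is a random placeholder, whereas CIBA crafts its confusing perturbations deliberately; that crafting step is not reproduced.

```python
# Compose a poisoned image from a corner trigger patch and a bounded additive
# perturbation. Illustrative only; the perturbation is a placeholder, not CIBA's.
import torch

def make_poisoned_image(image, trigger_patch, perturbation, eps=8 / 255):
    """image: (C, H, W) in [0, 1]; stamp a patch in the corner and add a bounded perturbation."""
    poisoned = image.clone()
    ph, pw = trigger_patch.shape[-2:]
    poisoned[:, -ph:, -pw:] = trigger_patch              # trigger patch in the bottom-right corner
    poisoned = poisoned + perturbation.clamp(-eps, eps)  # bounded "confusing" perturbation
    return poisoned.clamp(0.0, 1.0)

image = torch.rand(3, 224, 224)          # placeholder clean image
patch = torch.ones(3, 24, 24)            # placeholder trigger patch
delta = 0.01 * torch.randn(3, 224, 224)  # placeholder for a crafted confusing perturbation
poisoned = make_poisoned_image(image, patch, delta)
```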