1 code implementation • 19 Oct 2023 • Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong
As a result, the literature lacks a systematic understanding of prompt injection attacks and their defenses.
1 code implementation • 26 Mar 2023 • Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong
PORE can transform any existing recommender system to be provably robust against any untargeted data poisoning attacks, which aim to reduce the overall performance of a recommender system.
no code implementations • 15 Jan 2022 • Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
A pre-trained encoder may be deemed confidential because its training requires lots of data and computation resources as well as its public release may facilitate misuse of AI, e. g., for deepfakes generation.
3 code implementations • 1 Aug 2021 • Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong
In particular, our BadEncoder injects backdoors into a pre-trained image encoder such that the downstream classifiers built based on the backdoored image encoder for different downstream tasks simultaneously inherit the backdoor behavior.
no code implementations • 13 Jun 2021 • R. Spencer Hallyburton, Yupei Liu, Yulong Cao, Z. Morley Mao, Miroslav Pajic
Thus, in this work, we perform an analysis of camera-LiDAR fusion, in the AV context, under LiDAR spoofing attacks.
no code implementations • 7 Dec 2020 • Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong
Moreover, our evaluation results on MNIST and CIFAR10 show that the intrinsic certified robustness guarantees of kNN and rNN outperform those provided by state-of-the-art certified defenses.