no code implementations • 28 Oct 2023 • Kunlin Cai, Jinghuai Zhang, Will Shand, Zhiqing Hong, Guang Wang, Desheng Zhang, Jianfeng Chi, Yuan Tian
The attacks in our suite assume different levels of adversary knowledge and aim to extract different types of sensitive information from mobility data, providing a holistic privacy risk assessment for POI recommendation models.
1 code implementation • 5 May 2023 • Zhengyuan Jiang, Jinghuai Zhang, Neil Zhenqiang Gong
Specifically, a watermark is embedded into AI-generated content before it is released.
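To make the general idea concrete, here is a minimal toy sketch of embed-then-detect watermarking. This is an illustration only, not the method from the paper: it hides a bit string in the least-significant bits of (hypothetical) flattened grayscale pixels and reads it back at detection time.

```python
# Toy watermarking sketch (illustrative only, NOT the paper's scheme):
# embed a bit string into pixel LSBs, then recover it for detection.

def embed_watermark(pixels, bits):
    """Set the least-significant bit of each pixel to the matching watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels, n_bits):
    """Read back the LSBs of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [200, 13, 255, 97, 42, 128]   # hypothetical flattened grayscale image
bits = [1, 0, 1, 1, 0, 0]              # the watermark to embed
marked = embed_watermark(pixels, bits)
assert extract_watermark(marked, len(bits)) == bits  # watermark survives
```

Each pixel changes by at most 1 intensity level, which is why LSB-style schemes are visually imperceptible; real watermarks for AI-generated content use far more robust encodings.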
no code implementations • CVPR 2023 • Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
Existing certified defenses against adversarial point clouds suffer from a key limitation: their certified robustness guarantees are probabilistic, i.e., they produce an incorrect certified robustness guarantee with some probability.
no code implementations • 15 Nov 2022 • Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
In this work, we take the first step toward analyzing the limitations of existing backdoor attacks and propose new data poisoning based backdoor attacks (DPBAs) to CL, called CorruptEncoder.
1 code implementation • CVPR 2021 • Yicheng Liu, Jinghuai Zhang, Liangji Fang, Qinhong Jiang, Bolei Zhou
Predicting multiple plausible future trajectories of the nearby vehicles is crucial for the safety of autonomous driving.
1 code implementation • 29 Oct 2019 • Zixuan Huang, Jinghuai Zhang, Jing Liao
Recent neural style transfer frameworks have obtained astonishing visual quality and flexibility in Single-style Transfer (SST), but little attention has been paid to Multi-style Transfer (MST) which refers to simultaneously transferring multiple styles to the same image.