no code implementations • 30 Oct 2018 • Mingjie Sun, Jian Tang, Huichen Li, Bo Li, Chaowei Xiao, Yao Chen, Dawn Song
In this paper, we take the task of link prediction, one of the most fundamental problems in graph analysis, as an example, and introduce a data poisoning attack against node embedding methods.
1 code implementation • 8 Oct 2019 • Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Bo Li
To train the meta-model without knowledge of the attack strategy, we introduce a technique called jumbo learning that samples a set of Trojaned models following a general distribution.
no code implementations • CVPR 2020 • Huichen Li, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Bo Li
Such adversarial attacks can be achieved by adding a perturbation of small magnitude to the input to mislead the model's prediction.
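A minimal NumPy sketch of this kind of small-magnitude perturbation, using a gradient-sign (FGSM-style) step; the function name and epsilon value are illustrative, not taken from the paper:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.03):
    """Illustrative gradient-sign step: nudge every input coordinate by
    epsilon in the direction that increases the loss, then clip back to
    the valid input range [0, 1]. `grad` is the loss gradient w.r.t. x."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)
```

The perturbation is bounded by epsilon in the L-infinity norm, which is why it can stay visually imperceptible while still flipping the model's prediction.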
no code implementations • 17 Jan 2021 • James Tu, Huichen Li, Xinchen Yan, Mengye Ren, Yun Chen, Ming Liang, Eilyan Bitar, Ersin Yumer, Raquel Urtasun
Yet, there have been limited studies on the adversarial robustness of multi-modal models that fuse LiDAR features with image features.
1 code implementation • 25 Feb 2021 • Huichen Li, Linyi Li, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Bo Li
We aim to bridge the gap between the two by investigating how to efficiently estimate gradients in a projected low-dimensional space.
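A hedged sketch of the underlying idea: instead of probing the full input space, sample random directions inside a low-dimensional subspace, lift them back, and form a finite-difference estimate. The function name, the subspace construction, and the estimator details below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def estimate_gradient_lowdim(f, x, basis, n_samples=50, delta=1e-2, rng=None):
    """Monte-Carlo gradient estimate restricted to a low-dim subspace.

    f     : scalar black-box function (e.g. an attack loss), queried only
    x     : current input, shape (d,)
    basis : (k, d) matrix whose rows span the subspace, with k << d
    """
    rng = rng or np.random.default_rng(0)
    k = basis.shape[0]
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        z = rng.standard_normal(k)   # random direction in the low-dim space
        u = z @ basis                # lift it into the full input space
        u /= np.linalg.norm(u)
        # central finite difference along u approximates the directional derivative
        grad += (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    return grad / n_samples
```

Because every probe lies in a k-dimensional subspace, the query cost scales with k rather than with the full input dimension d, which is the efficiency gain the abstract refers to.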
1 code implementation • 10 Jun 2021 • Jiawei Zhang, Linyi Li, Huichen Li, Xiaolu Zhang, Shuang Yang, Bo Li
In this paper, we show that such efficiency highly depends on the scale at which the attack is applied, and attacking at the optimal scale significantly improves the efficiency.