Search Results for author: Huichen Li

Found 6 papers, 3 papers with code

Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation

1 code implementation • 10 Jun 2021 • Jiawei Zhang, Linyi Li, Huichen Li, Xiaolu Zhang, Shuang Yang, Bo Li

In this paper, we show that such efficiency highly depends on the scale at which the attack is applied, and attacking at the optimal scale significantly improves the efficiency.

Face Recognition

Nonlinear Projection Based Gradient Estimation for Query Efficient Blackbox Attacks

1 code implementation • 25 Feb 2021 • Huichen Li, Linyi Li, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Bo Li

We aim to bridge the gap between the two by investigating how to efficiently estimate gradient based on a projected low-dimensional space.
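A minimal sketch of the general idea behind query-efficient, projection-based gradient estimation: sample perturbation directions in a low-dimensional space, project them up to the input space, and average finite-difference directional derivatives. The helper names, the random linear up-projection, and the quadratic toy loss are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

np.random.seed(0)

def estimate_gradient(loss_fn, x, project_up, d_low, n_samples=500, delta=1e-2):
    # Monte Carlo finite-difference estimate: sample random directions in a
    # d_low-dimensional space, project them up to input space, and average
    # the resulting directional derivatives.
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        v = project_up(np.random.randn(d_low))     # low-dim -> input space
        v /= np.linalg.norm(v) + 1e-12             # unit-norm direction
        diff = loss_fn(x + delta * v) - loss_fn(x)
        grad += (diff / delta) * v
    return grad / n_samples

# Toy check with a quadratic loss and a random linear up-projection
d_high, d_low = 16, 4
W = np.random.randn(d_high, d_low)
x0 = np.random.randn(d_high)
g = estimate_gradient(lambda x: 0.5 * np.sum(x ** 2), x0, lambda z: W @ z, d_low)
```

Because only `d_low` random coordinates are sampled per query, far fewer model queries are needed than estimating all `d_high` input dimensions directly; the estimate lies in the span of the projection.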

Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving

no code implementations • 17 Jan 2021 • James Tu, Huichen Li, Xinchen Yan, Mengye Ren, Yun Chen, Ming Liang, Eilyan Bitar, Ersin Yumer, Raquel Urtasun

Yet, there have been limited studies on the adversarial robustness of multi-modal models that fuse LiDAR features with image features.

Adversarial Robustness • Denoising

QEBA: Query-Efficient Boundary-Based Blackbox Attack

no code implementations • CVPR 2020 • Huichen Li, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Bo Li

Such adversarial attacks can be achieved by adding a small magnitude of perturbation to the input to mislead model prediction.
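The snippet below sketches that basic idea of an additive adversarial perturbation, using a single white-box, FGSM-style step on a toy linear scorer. Note this is only an illustration of "small perturbation misleads the prediction" and is not the paper's boundary-based blackbox method; the model and helper names are made up.

```python
import numpy as np

def score(x, w):
    # Toy scoring function standing in for a classifier logit
    return float(w @ x)

def fgsm_like_step(x, w, eps):
    # Move each input coordinate by at most eps in the direction that
    # increases the score (the gradient of w @ x w.r.t. x is w)
    return x + eps * np.sign(w)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = fgsm_like_step(x, w, eps=0.05)
```

The perturbation is bounded in the L-infinity norm by `eps`, yet the score strictly increases, which is exactly the "small magnitude, large effect" property the snippet describes.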

Autonomous Driving • Dimensionality Reduction

Detecting AI Trojans Using Meta Neural Analysis

1 code implementation • 8 Oct 2019 • Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Bo Li

To train the meta-model without knowledge of the attack strategy, we introduce a technique called jumbo learning that samples a set of Trojaned models following a general distribution.
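A rough sketch of that jumbo-learning-style pipeline: sample many shadow models from a broad benign/Trojaned distribution, featurize each by its outputs on a fixed query set, and fit a meta-classifier on those features. Everything here is a stand-in assumption (linear "models", a shifted-weight "Trojan", a threshold meta-classifier), not the paper's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_shadow_model(trojaned):
    # Stand-in "model": a random linear scorer; Trojaned models get a
    # shifted weight pattern. Purely illustrative.
    w = rng.normal(size=4) + (2.0 if trojaned else 0.0)
    return lambda q: q @ w

# Sample shadow models following a general distribution, then featurize
# each one by its responses to a fixed set of queries
queries = rng.random((8, 4))
X, y = [], []
for _ in range(300):
    trojaned = rng.random() < 0.5
    model = sample_shadow_model(trojaned)
    X.append(np.array([model(q) for q in queries]))
    y.append(trojaned)
X, y = np.stack(X), np.array(y)

# Minimal meta-classifier: threshold on the mean query response,
# placed midway between the two class means
thr = 0.5 * (X[y].mean() + X[~y].mean())
pred = X.mean(axis=1) > thr
accuracy = float((pred == y).mean())
```

Because the meta-classifier only ever sees query responses, the same recipe applies when the target model is a blackbox.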

Data Poisoning

Data Poisoning Attack against Unsupervised Node Embedding Methods

no code implementations • 30 Oct 2018 • Mingjie Sun, Jian Tang, Huichen Li, Bo Li, Chaowei Xiao, Yao Chen, Dawn Song

In this paper, we take the task of link prediction as an example, which is one of the most fundamental problems for graph analysis, and introduce a data poisoning attack against node embedding methods.
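To make the attack surface concrete, here is a tiny illustration of poisoning a graph to flip a link-prediction score, using a simple common-neighbors heuristic rather than the node-embedding methods the paper targets; the graph and the edit are invented for the example.

```python
import numpy as np

def common_neighbors_score(A, u, v):
    # Link-prediction score: number of neighbors shared by u and v,
    # read off the squared adjacency matrix
    return int((A @ A)[u, v])

# Tiny undirected graph on nodes 0..3; target link (0, 1) is supported
# by the shared neighbor 2
A = np.zeros((4, 4), dtype=int)
for i, j in [(0, 2), (1, 2), (0, 3)]:
    A[i, j] = A[j, i] = 1

before = common_neighbors_score(A, 0, 1)

# Poisoning edit: delete edge (1, 2), destroying the shared neighbor
A[1, 2] = A[2, 1] = 0
after = common_neighbors_score(A, 0, 1)
```

A single edge modification in the training graph drives the score for the target pair from positive to zero, which is the kind of effect a poisoning attack on embeddings aims to achieve at scale.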

Data Poisoning • Link Prediction
