Adversarial Attack and Defense on Point Sets

28 Feb 2019 · Jiancheng Yang, Qiang Zhang, Rongyao Fang, Bingbing Ni, Jinxian Liu, Qi Tian

The emerging use of 3D point cloud data in safety-critical vision tasks (e.g., ADAS) urges researchers to pay more attention to the robustness of 3D representations and deep networks. To this end, we develop an attack and defense scheme dedicated to 3D point cloud data, aimed at preventing 3D point clouds from being manipulated as well as pursuing noise-tolerant 3D representations. A set of novel 3D point cloud attack operations is proposed via pointwise gradient perturbation and adversarial point attachment/detachment. We then develop a flexible perturbation-measurement scheme for 3D point cloud data to detect potential attacks or noisy sensing data. Notably, the proposed defense methods remain effective at detecting adversarial point clouds generated by a proof-of-concept attack that directly targets the defense. The transferability of adversarial attacks between several point cloud networks is also addressed, and we propose a momentum-enhanced pointwise gradient to improve attack transferability. We further analyze the transferability from adversarial point clouds to grid CNNs and vice versa. Extensive experimental results on common point cloud benchmarks demonstrate the validity of the proposed 3D attack and defense framework.
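To make the pointwise-gradient idea concrete, below is a minimal, hedged sketch of a momentum-enhanced pointwise gradient attack on a point cloud classifier. It is an illustrative reconstruction, not the authors' released implementation: the classifier interface (a PyTorch model mapping (B, N, 3) point clouds to class logits), the L∞ budget `eps`, the step size `alpha`, and the momentum factor `mu` are all assumptions introduced for illustration.

```python
# Illustrative sketch only: a momentum-enhanced pointwise gradient attack on a
# point cloud classifier. The interface and hyperparameters (model, eps, alpha,
# mu) are assumptions for illustration, not the paper's exact algorithm.
import torch
import torch.nn.functional as F

def momentum_pointwise_attack(model, points, labels, eps=0.02, alpha=0.005,
                              steps=10, mu=1.0):
    """points: (B, N, 3) float tensor; labels: (B,) ground-truth class ids."""
    model.eval()
    adv = points.clone().detach()
    momentum = torch.zeros_like(adv)
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        # Normalize the pointwise gradient per cloud, then accumulate momentum
        # so the update direction is stabilized across iterations.
        grad = grad / (grad.abs().mean(dim=(1, 2), keepdim=True) + 1e-12)
        momentum = mu * momentum + grad
        # Perturb every point along the sign of the accumulated gradient and
        # project back into an L_inf ball of radius eps around the clean input.
        adv = adv.detach() + alpha * momentum.sign()
        adv = points + torch.clamp(adv - points, min=-eps, max=eps)
    return adv.detach()
```

In attacks of this style, the momentum term is what is usually credited with improving cross-network transferability, in line with the motivation stated in the abstract.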
