Search Results for author: Yunhan Jia

Found 6 papers, 5 papers with code

Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking

1 code implementation 27 May 2019 Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning has started to focus on visual perception in autonomous driving and has studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack, Autonomous Driving, +5

Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking

1 code implementation ICLR 2020 Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Hao Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning has started to focus on visual perception in autonomous driving and has studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack, Autonomous Driving, +5

Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction

2 code implementations CVPR 2020 Yantao Lu, Yunhan Jia, Jian-Yu Wang, Bai Li, Weiheng Chai, Lawrence Carin, Senem Velipasalar

Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models.

Adversarial Attack, Image Classification, +5

Towards Practical Lottery Ticket Hypothesis for Adversarial Training

1 code implementation 6 Mar 2020 Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana

Recent research has proposed the lottery ticket hypothesis, suggesting that for a deep neural network there exist trainable sub-networks that perform as well as or better than the original model with a commensurate number of training steps.

Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction

1 code implementation 8 May 2019 Yunhan Jia, Yantao Lu, Senem Velipasalar, Zhenyu Zhong, Tao Wei

Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they maintain their effectiveness even against other models.

Image Classification, object-detection, +3
