Search Results for author: Zhenyu Zhong

Found 8 papers, 5 papers with code

Detecting Multi-Sensor Fusion Errors in Advanced Driver-Assistance Systems

3 code implementations • 14 Sep 2021 • Ziyuan Zhong, Zhisheng Hu, Shengjian Guo, Xinyang Zhang, Zhenyu Zhong, Baishakhi Ray

We define the failures (e.g., car crashes) caused by the faulty MSF as fusion errors and develop a novel evolutionary-based domain-specific search framework, FusED, for the efficient detection of fusion errors (a rough sketch of such a search loop follows below).

Autonomous Driving
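
The entry above names an evolutionary, domain-specific search over driving scenarios. The following is a minimal, hypothetical sketch of such a loop; the parameter names, population settings, and the `fitness` placeholder are illustrative assumptions, not the actual FusED implementation.

```python
import random

# Hypothetical scenario parameters; names and ranges are illustrative only.
PARAM_RANGES = {"npc_speed": (0.0, 20.0), "npc_cut_in_gap": (2.0, 30.0)}
POP_SIZE = 20
GENERATIONS = 10

def random_scenario():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def mutate(scenario, rate=0.3):
    child = dict(scenario)
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            child[k] = min(hi, max(lo, child[k] + random.gauss(0.0, 0.1 * (hi - lo))))
    return child

def fitness(scenario):
    # Placeholder: a real harness would run the driving simulator with the full
    # MSF stack and score how close the scenario comes to a fusion-induced
    # failure (e.g., a crash that would not occur with a correctly fused estimate).
    return random.random()

def evolutionary_search():
    population = [random_scenario() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: POP_SIZE // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

print("most failure-prone scenario found:", evolutionary_search())
```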

Coverage-based Scene Fuzzing for Virtual Autonomous Driving Testing

no code implementations • 2 Jun 2021 • Zhisheng Hu, Shengjian Guo, Zhenyu Zhong, Kang Li

Simulation-based virtual testing has become an essential step to ensure the safety of autonomous driving systems.

Autonomous Driving

Towards Practical Lottery Ticket Hypothesis for Adversarial Training

1 code implementation • 6 Mar 2020 • Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana

Recent research has proposed the lottery ticket hypothesis, suggesting that for a deep neural network there exist trainable sub-networks that perform as well as or better than the original model given commensurate training steps.
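
As a quick illustration of the hypothesis itself (not the adversarial-training procedure studied in the paper), the toy NumPy sketch below prunes the smallest-magnitude weights and rewinds the survivors to their initial values; the shapes and sparsity level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w_init = rng.normal(size=(256, 128))                           # weights at initialization
w_trained = w_init + rng.normal(scale=0.1, size=w_init.shape)  # stand-in for trained weights

sparsity = 0.8                                                 # prune 80% of weights by magnitude
threshold = np.quantile(np.abs(w_trained), sparsity)
mask = (np.abs(w_trained) >= threshold).astype(w_init.dtype)

ticket = w_init * mask  # sparse sub-network, rewound to its initial values for retraining
print(f"kept {mask.mean():.1%} of the weights")
```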

Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking

1 code implementation • ICLR 2020 • Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Hao Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning has started to focus on visual perception in autonomous driving and has studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack • Autonomous Driving +4

Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking

1 code implementation • 27 May 2019 • Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning has started to focus on visual perception in autonomous driving and has studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack • Autonomous Driving +4

Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction

1 code implementation • 8 May 2019 • Yunhan Jia, Yantao Lu, Senem Velipasalar, Zhenyu Zhong, Tao Wei

Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they maintain their effectiveness even against other models (see the sketch below).

Image Classification • object-detection +2
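
The snippet above only states the transferability problem; based on the paper's title, the toy PyTorch sketch below assumes that "dispersion" refers to the standard deviation of an intermediate feature map and perturbs the input to shrink it. The model choice, layer cut-off, step size, and iteration count are all assumptions, not the authors' setup.

```python
import torch
import torchvision.models as models

# Truncate a CNN at an intermediate block to expose a feature map.
model = models.resnet18(weights=None).eval()
feature_extractor = torch.nn.Sequential(*list(model.children())[:6])

x = torch.rand(1, 3, 224, 224)             # stand-in input image in [0, 1]
x_adv = x.clone().requires_grad_(True)

for _ in range(10):
    feats = feature_extractor(x_adv)
    loss = feats.std()                      # "dispersion" of the feature map (assumption)
    loss.backward()
    with torch.no_grad():
        x_adv -= 0.01 * x_adv.grad.sign()   # small step that lowers the dispersion
        x_adv.clamp_(0, 1)
    x_adv.grad = None
```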
