no code implementations • 10 Jun 2023 • Ziyuan Zhong, Davis Rempe, Yuxiao Chen, Boris Ivanovic, Yulong Cao, Danfei Xu, Marco Pavone, Baishakhi Ray
Realistic and controllable traffic simulation is a core capability that is necessary to accelerate autonomous vehicle (AV) development.
no code implementations • 31 Oct 2022 • Ziyuan Zhong, Davis Rempe, Danfei Xu, Yuxiao Chen, Sushant Veer, Tong Che, Baishakhi Ray, Marco Pavone
Controllable and realistic traffic simulation is critical for developing and verifying autonomous vehicles.
1 code implementation • 24 Mar 2022 • Ziyuan Zhong, Yuchi Tian, Conor J. Sweeney, Vicente Ordonez, Baishakhi Ray
In particular, it can repair confusion errors and bias errors of DNN models for both single-label and multi-label image classification.
no code implementations • 2 Dec 2021 • Ziyuan Zhong, Yun Tang, Yuan Zhou, Vania de Oliveira Neves, Yang Liu, Baishakhi Ray
To bridge this gap, in this work, we provide a generic formulation of scenario-based testing in high-fidelity simulation and conduct a literature review on the existing works.
3 code implementations • 14 Sep 2021 • Ziyuan Zhong, Zhisheng Hu, Shengjian Guo, Xinyang Zhang, Zhenyu Zhong, Baishakhi Ray
We define the failures (e.g., car crashes) caused by the faulty MSF as fusion errors and develop a novel evolutionary-based domain-specific search framework, FusED, for the efficient detection of fusion errors.
1 code implementation • 13 Sep 2021 • Ziyuan Zhong, Gail Kaiser, Baishakhi Ray
Self-driving cars and trucks, i.e., autonomous vehicles (AVs), should not be accepted by regulatory bodies and the public until there is much higher confidence in their safety and reliability, which can most practically and convincingly be achieved by testing.
1 code implementation • 9 Oct 2020 • Ziyuan Zhong, Yuchi Tian, Baishakhi Ray
To this end, we study the local per-input robustness properties of the DNNs and leverage those properties to build a white-box (DeepRobust-W) and a black-box (DeepRobust-B) tool to automatically identify the non-robust points.
1 code implementation • NeurIPS 2019 • Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, Baishakhi Ray
Deep networks are well-known to be fragile to adversarial attacks.
1 code implementation • 20 May 2019 • Yuchi Tian, Ziyuan Zhong, Vicente Ordonez, Gail Kaiser, Baishakhi Ray
We found that many of the reported erroneous cases in popular DNN image classifiers occur because the trained models confuse one class with another or show biases towards some classes over others.
1 code implementation • NeurIPS 2019 • Alexandre Louis Lamy, Ziyuan Zhong, Aditya Krishna Menon, Nakul Verma
We finally show that our procedure is empirically effective on two case-studies involving sensitive feature censoring.