no code implementations • 28 Nov 2023 • YiXuan Wang, Ruochen Jiao, Sinong Simon Zhan, Chengtian Lang, Chao Huang, Zhaoran Wang, Zhuoran Yang, Qi Zhu
Autonomous Driving (AD) encounters significant safety hurdles in long-tail, unforeseen driving scenarios, largely stemming from the non-interpretability and poor generalization of the deep neural networks within the AD system, particularly on out-of-distribution and uncertain data.
1 code implementation • 3 Nov 2023 • Simon Sinong Zhan, YiXuan Wang, Qingyuan Wu, Ruochen Jiao, Chao Huang, Qi Zhu
In the context of safe exploration, Reinforcement Learning (RL) has long grappled with the tradeoff between maximizing rewards and minimizing safety violations, particularly in complex environments with contact-rich or non-smooth dynamics, and when dealing with high-dimensional pixel observations.
no code implementations • 17 Sep 2023 • Ruochen Jiao, YiXuan Wang, Xiangguo Liu, Chao Huang, Qi Zhu
However, ensuring that the generated/predicted trajectories are physically realistic remains a challenging problem for these methods.
no code implementations • 9 Mar 2023 • Ruochen Jiao, Juyang Bai, Xiangguo Liu, Takami Sato, Xiaowei Yuan, Qi Alfred Chen, Qi Zhu
We conduct extensive experiments to demonstrate that our supervised method based on contrastive learning and our unsupervised method based on reconstruction with a semantic latent space can significantly improve anomalous trajectory detection in their respective settings over various baseline methods.
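The general idea behind the unsupervised branch can be illustrated with a minimal sketch of reconstruction-based anomaly detection: fit a low-dimensional model on normal trajectories, then flag any trajectory whose reconstruction error is large. All names here are illustrative, and a simple linear (SVD-based) autoencoder stands in for the paper's learned semantic latent space.

```python
import numpy as np

def fit_linear_autoencoder(trajectories, latent_dim=2):
    """Fit a linear 'autoencoder' via SVD on (n, d) flattened normal trajectories.

    Hypothetical stand-in for a learned encoder/decoder with a semantic
    latent space; returns the data mean and a latent basis.
    """
    mean = trajectories.mean(axis=0)
    _, _, vt = np.linalg.svd(trajectories - mean, full_matrices=False)
    basis = vt[:latent_dim]  # (latent_dim, d) shared encoder/decoder basis
    return mean, basis

def reconstruction_error(traj, mean, basis):
    """Encode, decode, and return the reconstruction error as an anomaly score."""
    latent = (traj - mean) @ basis.T   # encode into the latent space
    recon = latent @ basis + mean      # decode back to trajectory space
    return float(np.linalg.norm(traj - recon))

# Toy data: normal trajectories lie near a rank-2 subspace of a 10-d space,
# so an arbitrary 10-d vector sticks out with a large reconstruction error.
rng = np.random.default_rng(0)
normal = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 10))
mean, basis = fit_linear_autoencoder(normal, latent_dim=2)
normal_err = reconstruction_error(normal[0], mean, basis)
anomaly_err = reconstruction_error(rng.normal(size=10) * 5.0, mean, basis)
```

In practice a threshold on the score (calibrated on held-out normal data) separates normal from anomalous trajectories; the linear model is only a sketch of that mechanism, not the paper's architecture.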
1 code implementation • ICCV 2023 • Ruochen Jiao, Xiangguo Liu, Takami Sato, Qi Alfred Chen, Qi Zhu
In this paper, we present a novel adversarial training method for trajectory prediction.
no code implementations • 29 Sep 2022 • YiXuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu
It is quite challenging to ensure the safety of reinforcement learning (RL) agents in an unknown and stochastic environment under hard constraints that require the system state not to reach certain specified unsafe regions.
no code implementations • 15 Aug 2022 • Zhilu Wang, YiXuan Wang, Feisi Fu, Ruochen Jiao, Chao Huang, Wenchao Li, Qi Zhu
Moreover, GROCET provides differentiable global robustness, which is leveraged in the training of globally robust neural networks.
no code implementations • 27 May 2022 • Ruochen Jiao, Xiangguo Liu, Takami Sato, Qi Alfred Chen, Qi Zhu
In addition, experiments show that our method can significantly improve the system's robust generalization to unseen patterns of attacks.
no code implementations • 2 Mar 2022 • Ruochen Jiao, Xiangguo Liu, Bowen Zheng, Dave Liang, Qi Zhu
Our model addresses trajectory generation and prediction in a unified architecture, benefiting both tasks: it can generate diverse, controllable, and realistic trajectories to enhance planner optimization in safety-critical and long-tailed scenarios, and it can provide predictions of critical behaviors in addition to final trajectories for decision making.
no code implementations • 27 Feb 2021 • Ruochen Jiao, Hengyi Liang, Takami Sato, Junjie Shen, Qi Alfred Chen, Qi Zhu
The experimental results demonstrate that our approach can effectively mitigate the impact of adversarial attacks, achieving a 55% to 90% improvement over the original OpenPilot.