no code implementations • 10 Sep 2024 • Qiujing Lu, Xuanhan Wang, Yiwei Jiang, Guangming Zhao, Mingyue Ma, Shuo Feng
A method that enables easily controllable scenario generation, producing realistic and challenging situations for efficient autonomous vehicle (AV) testing, is greatly needed.
no code implementations • 20 Mar 2024 • Ruoxuan Bai, Jingxuan Yang, Weiduo Gong, Yi Zhang, Qiujing Lu, Shuo Feng
The difficulty of predicting criticality arises from the extreme data imbalance caused by rare events in high-dimensional variables, a challenge we refer to as the curse of rarity.
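A minimal illustration of this imbalance (not the paper's method; the event rate and the trivial predictor below are hypothetical): when safety-critical events are this rare, an "always safe" predictor scores near-perfect accuracy while recovering none of the events.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    p_critical = 1e-3                       # hypothetical rarity of critical events
    labels = rng.random(n) < p_critical     # True = safety-critical scenario

    always_safe = np.zeros(n, dtype=bool)   # trivial predictor: never flags criticality
    accuracy = np.mean(always_safe == labels)
    recall = np.sum(always_safe & labels) / max(labels.sum(), 1)

    print(f"critical events: {labels.sum()} / {n}")
    print(f"accuracy of 'always safe': {accuracy:.4f}")   # ~0.999, yet uninformative
    print(f"recall on critical events: {recall:.4f}")     # 0.0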
1 code implementation • 17 Jul 2022 • Qiujing Lu, YiPeng Zhang, Mingjian Lu, Vwani Roychowdhury
We propose On-Demand MOtion Generation (ODMO), a novel framework for generating realistic and diverse long-term 3D human motion sequences conditioned only on action types, with an additional capability for customization.
Ranked #1 for human action generation on UESTC RGB-D
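A hedged sketch of the kind of interface a conditional generator like ODMO exposes (the class, layer sizes, action count, and sequence length are assumptions, not the released ODMO architecture): an action label plus a latent code is decoded into a joint-position sequence, so resampling the latent yields diverse motions for the same action.

    import torch
    import torch.nn as nn

    class ConditionalMotionDecoder(nn.Module):
        def __init__(self, num_actions=40, latent_dim=64, num_joints=24, seq_len=120):
            super().__init__()
            self.seq_len, self.num_joints = seq_len, num_joints
            self.action_emb = nn.Embedding(num_actions, 32)
            self.net = nn.Sequential(
                nn.Linear(latent_dim + 32, 256), nn.ReLU(),
                nn.Linear(256, seq_len * num_joints * 3),
            )

        def forward(self, action, z):
            # concatenate the action embedding with the latent code, decode to a motion
            h = torch.cat([self.action_emb(action), z], dim=-1)
            return self.net(h).view(-1, self.seq_len, self.num_joints, 3)

    decoder = ConditionalMotionDecoder()
    action = torch.tensor([3, 3, 7])   # repeating an action label: diversity comes from z
    z = torch.randn(3, 64)
    motions = decoder(action, z)
    print(motions.shape)               # torch.Size([3, 120, 24, 3])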
no code implementations • 10 May 2022 • Qiujing Lu, Weiqiao Han, Jeffrey Ling, Minfa Wang, Haoyu Chen, Balakrishnan Varadarajan, Paul Covington
Predicting future trajectories of road agents is a critical task for autonomous driving.
no code implementations • 6 May 2022 • Arash Vahabpour, Tianyi Wang, Qiujing Lu, Omead Pooladzandi, Vwani Roychowdhury
Imitation learning is the task of replicating expert policy from demonstrations, without access to a reward function.
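For reference, a minimal behavior-cloning sketch of that setting (generic supervised imitation, not this paper's model; the dimensions and data are placeholders): the policy is fit to expert (state, action) pairs by regression, and no reward signal is used anywhere.

    import torch
    import torch.nn as nn

    states = torch.randn(1024, 8)      # placeholder expert states
    actions = torch.randn(1024, 2)     # placeholder expert actions

    policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    for _ in range(200):
        loss = nn.functional.mse_loss(policy(states), actions)  # regression on demonstrations
        opt.zero_grad()
        loss.backward()
        opt.step()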
no code implementations • 29 Sep 2021 • Arash Vahabpour, Qiujing Lu, Tianyi Wang, Omead Pooladzandi, Vwani Roychowdhury
To address this problem, we introduce a novel generative model for behavior cloning that operates in a mode-separating manner.
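One generic way to express mode separation in a cloning policy is a mixture-style head with one expert per mode; the sketch below is only that (the ModeSeparatedPolicy class, mode count, and dimensions are hypothetical), and the paper's actual generative model may differ, for example by using a latent variable rather than explicit heads.

    import torch
    import torch.nn as nn

    class ModeSeparatedPolicy(nn.Module):
        def __init__(self, state_dim=8, action_dim=2, num_modes=4):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
            self.mode_logits = nn.Linear(64, num_modes)   # soft assignment of states to modes
            self.heads = nn.ModuleList(
                [nn.Linear(64, action_dim) for _ in range(num_modes)]  # one action head per mode
            )

        def forward(self, state):
            h = self.encoder(state)
            weights = torch.softmax(self.mode_logits(h), dim=-1)             # (B, num_modes)
            actions = torch.stack([head(h) for head in self.heads], dim=1)   # (B, num_modes, act_dim)
            return weights, actions

    policy = ModeSeparatedPolicy()
    w, a = policy(torch.randn(5, 8))
    print(w.shape, a.shape)            # torch.Size([5, 4]) torch.Size([5, 4, 2])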