no code implementations • 13 Mar 2025 • Siyin Wang, Zhaoye Fei, Qinyuan Cheng, Shiduo Zhang, Panpan Cai, Jinlan Fu, Xipeng Qiu
Recent advances in large vision-language models (LVLMs) have shown promise for embodied task planning, yet they struggle with fundamental challenges like dependency constraints and efficiency.
no code implementations • 27 Sep 2024 • Xuanjin Jin, Chendong Zeng, Shengfa Zhu, Chunxiao Liu, Panpan Cai
To enhance safety and robustness, the planner further applies importance sampling to refine the driving trajectory conditioned on the planned high-level behavior.
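As a rough illustration of the importance-sampling refinement step mentioned in this abstract, the minimal Python sketch below perturbs a nominal trajectory for the chosen high-level behavior and averages the samples with cost-based weights. The function names, Gaussian noise model, and softmax weighting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def refine_trajectory(nominal_traj, cost_fn, n_samples=100, noise_std=0.1, temperature=1.0):
    """Refine a nominal trajectory by importance-weighted averaging of noisy samples.

    nominal_traj: (T, 2) array of waypoints for the planned high-level behavior.
    cost_fn: maps a candidate trajectory to a scalar cost (e.g. collision risk + comfort).
    """
    rng = np.random.default_rng(0)
    # Sample candidate trajectories around the nominal plan.
    samples = nominal_traj + rng.normal(0.0, noise_std, size=(n_samples, *nominal_traj.shape))
    costs = np.array([cost_fn(s) for s in samples])
    # Importance weights: lower-cost candidates count more (softmax over negative cost).
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()
    # The weighted average is the refined trajectory.
    return np.einsum("n,nij->ij", weights, samples)
```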
1 code implementation • 31 Aug 2024 • Kunming Su, Qiuxia Wu, Panpan Cai, Xiaogang Zhu, Xuequan Lu, Zhiyong Wang, Kun Hu
Finally, the predictor predicts the latent features of the masked patches using the output latent embeddings from the student, supervised by the outputs from the teacher.
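The masked latent prediction described above can be sketched as a student–teacher setup; the PyTorch snippet below is only a schematic under assumed architecture, dimensions, and pooling, not the paper's actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy dimensions; the real model operates on point-cloud patch embeddings.
dim, n_patches, n_masked = 64, 32, 8

layer = lambda: nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
student = nn.TransformerEncoder(layer(), num_layers=2)
teacher = nn.TransformerEncoder(layer(), num_layers=2)   # e.g. a frozen/EMA copy of the student
predictor = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

patches = torch.randn(1, n_patches, dim)                 # patch embeddings for one sample
visible = patches[:, n_masked:]                          # only unmasked patches go to the student

with torch.no_grad():                                    # teacher sees all patches, no gradients
    target = teacher(patches)[:, :n_masked]

context = student(visible).mean(dim=1)                   # student encodes the visible patches
pred = predictor(context).unsqueeze(1).expand(-1, n_masked, -1)
loss = F.mse_loss(pred, target)                          # predict masked latents, supervised by the teacher
loss.backward()
```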
no code implementations • 24 Aug 2024 • Haiyao Cao, Zhen Zhang, Panpan Cai, Yuhang Liu, Jinan Zou, Ehsan Abbasnejad, Biwei Huang, Mingming Gong, Anton Van Den Hengel, Javen Qinfeng Shi
We revisit this line of research and find that incorporating RL-specific context can reduce unnecessary assumptions in previous identifiability analyses for latent states.
2 code implementations • 19 Aug 2024 • Ruiqi Zhang, Jing Hou, Florian Walter, Shangding Gu, Jiayi Guan, Florian Röhrbein, Yali Du, Panpan Cai, Guang Chen, Alois Knoll
Reinforcement Learning (RL) is a potent tool for sequential decision-making and has achieved performance surpassing human capabilities across many challenging real-world tasks.
no code implementations • 12 Mar 2023 • Yiyuan Lee, Katie Lee, Panpan Cai, David Hsu, Lydia E. Kavraki
Identifying internal parameters for planning is crucial to maximizing the performance of a planner.
1 code implementation • 23 Sep 2022 • Mohamad H. Danesh, Panpan Cai, David Hsu
To address this, we propose a new algorithm, LEarning Attention over Driving bEhavioRs (LEADER), that learns to attend to critical human behaviors during planning.
no code implementations • 11 Jan 2021 • Panpan Cai, David Hsu
To achieve real-time performance for large-scale planning, this work introduces a new algorithm, Learning from Tree Search for Driving (LeTS-Drive), which integrates planning and learning in a closed loop, and applies it to autonomous driving in crowded urban traffic in simulation.
Autonomous Driving
Robotics
1 code implementation • 7 Nov 2020 • Yiyuan Lee, Panpan Cai, David Hsu
The partially observable Markov decision process (POMDP) is a principled general framework for robot decision making under uncertainty, but POMDP planning suffers from high computational complexity when long-term planning is required.
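For context on the framework mentioned here, the snippet below is a generic discrete POMDP belief update (Bayes filter), not code from this paper; the transition and observation matrices and all numbers are made up for illustration. A planner must search over sequences of such belief updates, which is what makes long-horizon POMDP planning expensive.

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """Bayes filter over a discrete state space:
    b'(s') ∝ Z[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    predicted = T[a].T @ b               # predict the next-state distribution
    updated = Z[a][:, o] * predicted     # reweight by the observation likelihood
    return updated / updated.sum()       # normalize back to a probability distribution

# Illustrative 2-state, 1-action, 2-observation example (arbitrary numbers).
T = {0: np.array([[0.9, 0.1],            # T[a][s, s']
                  [0.2, 0.8]])}
Z = {0: np.array([[0.8, 0.2],            # Z[a][s', o]
                  [0.3, 0.7]])}
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=1, T=T, Z=Z))
```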
3 code implementations • 11 Nov 2019 • Panpan Cai, Yiyuan Lee, Yuanfu Luo, David Hsu
Autonomous driving in an unregulated urban crowd is an outstanding challenge, especially in the presence of many aggressive, high-speed traffic participants.
Robotics
Multiagent Systems
1 code implementation • 4 Jun 2019 • Yuanfu Luo, Panpan Cai, Yiyuan Lee, David Hsu
Further, the computational efficiency and the flexibility of GAMMA enable (i) simulation of mixed urban traffic at many locations worldwide and (ii) planning for autonomous driving in dense traffic with uncertain driver behaviors, both in real time.
no code implementations • 29 May 2019 • Panpan Cai, Yuanfu Luo, Aseem Saxena, David Hsu, Wee Sun Lee
LeTS-Drive leverages the robustness of planning and the runtime efficiency of learning to enhance the performance of both.
no code implementations • 30 May 2018 • Yuanfu Luo, Panpan Cai, Aniket Bera, David Hsu, Wee Sun Lee, Dinesh Manocha
Our planning system combines a POMDP algorithm with the pedestrian motion model and runs in near real time.
Robotics
1 code implementation • 17 Feb 2018 • Panpan Cai, Yuanfu Luo, David Hsu, Wee Sun Lee
Planning under uncertainty is critical for robust robot performance in uncertain, dynamic environments, but it incurs high computational cost.