no code implementations • 13 Aug 2024 • Yutao Zhu, Xiaosong Jia, Xinyu Yang, Junchi Yan
The integration of data from diverse sensor modalities (e.g., camera and LiDAR) is a prevalent methodology in autonomous driving.
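As a toy illustration of this idea, a late-fusion module might concatenate per-modality features and project them into a shared space. The module, names, and dimensions below are assumptions for illustration, not the paper's design:

```python
import torch
import torch.nn as nn

class SimpleFusion(nn.Module):
    """Minimal late-fusion sketch; names and dimensions are illustrative."""
    def __init__(self, cam_dim=256, lidar_dim=128, out_dim=256):
        super().__init__()
        self.proj = nn.Linear(cam_dim + lidar_dim, out_dim)

    def forward(self, cam_feat, lidar_feat):
        # Concatenate per-sample features from both modalities,
        # then project them to a shared embedding space.
        return self.proj(torch.cat([cam_feat, lidar_feat], dim=-1))

fused = SimpleFusion()(torch.randn(4, 256), torch.randn(4, 128))
```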
1 code implementation • 6 Jun 2024 • Xiaosong Jia, Zhenjie Yang, QiFeng Li, Zhiyuan Zhang, Junchi Yan
In an era marked by the rapid scaling of foundation models, autonomous driving technologies are approaching a transformative threshold where end-to-end autonomous driving (E2E-AD) emerges due to its potential to scale up in a data-driven manner.
no code implementations • 20 Mar 2024 • Xiaosong Jia, Shaoshuai Shi, Zijun Chen, Li Jiang, Wenlong Liao, Tao He, Junchi Yan
As an essential task in autonomous driving (AD), motion prediction aims to predict the future states of surrounding objects for navigation.
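For intuition about the task interface, the classic constant-velocity baseline (a standard toy baseline, not the paper's model) simply extrapolates each agent's last observed motion over the horizon:

```python
import torch

# Toy constant-velocity baseline for motion prediction (illustrative only):
# extrapolate each agent's last observed velocity over the horizon.
def constant_velocity(past_xy, horizon=6, dt=0.5):
    vel = (past_xy[:, -1] - past_xy[:, -2]) / dt            # (N, 2) last velocity
    steps = torch.arange(1, horizon + 1).view(1, -1, 1) * dt
    return past_xy[:, -1:, :] + vel.unsqueeze(1) * steps    # (N, horizon, 2)

future = constant_velocity(torch.randn(3, 10, 2))
```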
no code implementations • 5 Mar 2024 • Han Lu, Xiaosong Jia, Yichen Xie, Wenlong Liao, Xiaokang Yang, Junchi Yan
End-to-end differentiable learning for autonomous driving (AD) has recently become a prominent paradigm.
1 code implementation • 2 Nov 2023 • Zhenjie Yang, Xiaosong Jia, Hongyang Li, Junchi Yan
Recently, large language models (LLMs) have demonstrated abilities including context understanding, logical reasoning, and answer generation.
1 code implementation • ICCV 2023 • Xiaosong Jia, Yulu Gao, Li Chen, Junchi Yan, Patrick Langechuan Liu, Hongyang Li
We find that even when equipped with a SOTA perception model, directly letting the student model learn the required inputs of the teacher model leads to poor driving performance, due to the large distribution gap between the predicted privileged inputs and the ground truth (a hypothetical adapter sketch follows the leaderboard entry below).
Ranked #2 on the CARLA longest6 benchmark
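One hypothetical way to bridge such a gap is to train an adapter that maps the student's predicted privileged inputs toward the ground-truth distribution the teacher expects. The sketch below, with assumed feature shapes and adapter design, is illustrative and not the paper's exact mechanism:

```python
import torch
import torch.nn as nn

# Hypothetical adapter bridging the gap between a student's predicted
# privileged inputs and the ground-truth inputs a frozen teacher expects.
class FeatureAdapter(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, student_feat):
        return self.net(student_feat)

adapter = FeatureAdapter()
student_feat = torch.randn(8, 256)   # predicted privileged inputs
teacher_feat = torch.randn(8, 256)   # ground-truth privileged inputs
loss = nn.functional.mse_loss(adapter(student_feat), teacher_feat)
loss.backward()
```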
1 code implementation • CVPR 2023 • Xiaosong Jia, Penghao Wu, Li Chen, Jiangwei Xie, Conghui He, Junchi Yan, Hongyang Li
End-to-end autonomous driving has made impressive progress in recent years.
Ranked #4 on the CARLA longest6 benchmark
1 code implementation • 3 Jan 2023 • Penghao Wu, Li Chen, Hongyang Li, Xiaosong Jia, Junchi Yan, Yu Qiao
Witnessing the impressive achievements of pre-training on large-scale data in computer vision and natural language processing, we ask whether this idea can be adapted in a grab-and-go spirit to mitigate the sample-inefficiency problem of visuomotor driving.
1 code implementation • CVPR 2023 • Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, Hongyang Li
To this end, we revisit the key components of perception and prediction, and prioritize the tasks such that all of them contribute to planning.
2 code implementations • 12 Sep 2022 • Hongyang Li, Chonghao Sima, Jifeng Dai, Wenhai Wang, Lewei Lu, Huijie Wang, Jia Zeng, Zhiqi Li, Jiazhi Yang, Hanming Deng, Hao Tian, Enze Xie, Jiangwei Xie, Li Chen, Tianyu Li, Yang Li, Yulu Gao, Xiaosong Jia, Si Liu, Jianping Shi, Dahua Lin, Yu Qiao
As sensor configurations grow more complex, integrating multi-source information from different sensors and representing features in a unified view become vitally important.
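As a minimal sketch of the unified-view idea, learnable BEV queries can cross-attend to flattened multi-camera features; the grid size, dimensions, and token counts below are illustrative assumptions, not the survey's prescription:

```python
import torch
import torch.nn as nn

# Minimal sketch of unifying multi-camera features in a shared BEV grid
# via cross-attention; grid size and dimensions are illustrative only.
dim, bev_h, bev_w, n_cam = 256, 50, 50, 6
bev_queries = nn.Parameter(torch.randn(bev_h * bev_w, dim))
attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

img_feats = torch.randn(1, n_cam * 1000, dim)      # flattened per-camera tokens
bev_feats, _ = attn(bev_queries.unsqueeze(0), img_feats, img_feats)
bev_map = bev_feats.reshape(1, bev_h, bev_w, dim)  # unified BEV representation
```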
1 code implementation • 16 Jun 2022 • Penghao Wu, Xiaosong Jia, Li Chen, Junchi Yan, Hongyang Li, Yu Qiao
The two branches are connected so that the control branch receives guidance from the trajectory branch at each time step (a toy sketch of this coupling appears after the leaderboard entry below).
Ranked #3 on Autonomous Driving on CARLA Leaderboard
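A toy sketch of such a two-branch coupling follows; all module names, dimensions, and the GRU-based recurrence are assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

# Hypothetical two-branch sketch: a trajectory branch produces per-step
# features that guide a control branch at every time step.
class TwoBranchPolicy(nn.Module):
    def __init__(self, dim=128, steps=4):
        super().__init__()
        self.steps = steps
        self.traj_gru = nn.GRUCell(dim, dim)
        self.ctrl_gru = nn.GRUCell(dim, dim)
        self.waypoint = nn.Linear(dim, 2)   # (x, y) waypoint per step
        self.control = nn.Linear(dim, 3)    # steer, throttle, brake

    def forward(self, scene_feat):
        h_traj = h_ctrl = torch.zeros_like(scene_feat)
        waypoints, controls = [], []
        for _ in range(self.steps):
            h_traj = self.traj_gru(scene_feat, h_traj)
            # Control branch receives guidance from the trajectory branch.
            h_ctrl = self.ctrl_gru(h_traj, h_ctrl)
            waypoints.append(self.waypoint(h_traj))
            controls.append(self.control(h_ctrl))
        return torch.stack(waypoints, 1), torch.stack(controls, 1)

wps, ctrls = TwoBranchPolicy()(torch.randn(2, 128))
```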
1 code implementation • 30 Apr 2022 • Xiaosong Jia, Penghao Wu, Li Chen, Yu Liu, Hongyang Li, Junchi Yan
Based on these observations, we propose Heterogeneous Driving Graph Transformer (HDGT), a backbone modelling the driving scene as a heterogeneous graph with different types of nodes and edges.
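To make the heterogeneous-graph idea concrete, here is a minimal sketch with invented node and edge types; the taxonomy and per-edge-type message function are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

# Minimal heterogeneous driving graph sketch: node/edge types are
# illustrative and not the paper's exact taxonomy.
node_feats = {
    "vehicle": torch.randn(5, 64),
    "lane":    torch.randn(20, 64),
}
# (src_type, relation, dst_type) -> (src_idx, dst_idx) index pairs
edges = {
    ("lane", "on", "vehicle"): (torch.tensor([0, 1]), torch.tensor([0, 2])),
}
msg_fn = {rel: nn.Linear(64, 64) for rel in edges}  # per-edge-type transform

updated = {t: f.clone() for t, f in node_feats.items()}
for (src_t, rel, dst_t), (src_idx, dst_idx) in edges.items():
    msg = msg_fn[(src_t, rel, dst_t)](node_feats[src_t][src_idx])
    updated[dst_t].index_add_(0, dst_idx, msg)  # aggregate typed messages
```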
no code implementations • 4 Nov 2020 • Xiaosong Jia, Liting Sun, Masayoshi Tomizuka, Wei Zhan
We find three interpretable patterns of interaction, offering insights into driver behavior representation, modeling, and comprehension.