no code implementations • 5 Dec 2024 • Chiyu Max Jiang, Yijing Bai, Andre Cornman, Christopher Davis, Xiukun Huang, Hong Jeon, Sakshum Kulshrestha, John Lambert, Shuangyu Li, Xuanyu Zhou, Carlos Fuertes, Chang Yuan, Mingxing Tan, Yin Zhou, Dragomir Anguelov
Realistic and interactive scene simulation is a key prerequisite for autonomous vehicle (AV) development.
no code implementations • 27 Nov 2024 • Lewen Yang, Xuanyu Zhou, Juao Fan, Xinyi Xie, Shengxin Zhu
Foundation models are characterized by pre-training, transfer learning, and self-supervised learning; a pre-trained model can be fine-tuned and applied to a variety of downstream tasks.
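The pre-train-then-fine-tune workflow described above can be illustrated with a minimal toy sketch: a fixed linear-plus-tanh map stands in for a frozen pre-trained backbone, and only a small logistic-regression head is trained on the downstream task. All names, weights, and data here are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" backbone: a fixed random linear map followed by tanh,
# standing in for a frozen foundation-model feature extractor (toy weights).
W_backbone = rng.normal(size=(8, 4))

def features(x):
    # Frozen during fine-tuning: only the head below receives gradients.
    return np.tanh(x @ W_backbone)

# Toy downstream task: binary label determined by the first input feature.
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(float)

# Fine-tune only a small linear head via gradient descent on log loss.
w_head = np.zeros(4)
b = 0.0
lr = 0.5
for _ in range(300):
    f = features(X)
    p = 1.0 / (1.0 + np.exp(-(f @ w_head + b)))  # sigmoid predictions
    grad = p - y                                  # dLoss/dlogits
    w_head -= lr * f.T @ grad / len(X)
    b -= lr * grad.mean()

# Training accuracy of the fine-tuned head on frozen features.
acc = ((1.0 / (1.0 + np.exp(-(features(X) @ w_head + b))) > 0.5) == (y > 0.5)).mean()
```

Because the backbone is frozen, only the 4-dimensional head is updated, which is the key efficiency benefit of fine-tuning over training from scratch.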
no code implementations • 7 Apr 2023 • Kan Chen, Runzhou Ge, Hang Qiu, Rami Al-Rfou, Charles R. Qi, Xuanyu Zhou, Zoey Yang, Scott Ettinger, Pei Sun, Zhaoqi Leng, Mustafa Baniodeh, Ivan Bogun, Weiyue Wang, Mingxing Tan, Dragomir Anguelov
To study the effect of these modular approaches, design new paradigms that mitigate these limitations, and accelerate the development of end-to-end motion forecasting models, we augment the Waymo Open Motion Dataset (WOMD) with large-scale, high-quality, diverse LiDAR data for the motion forecasting task.
1 code implementation • CVPR 2022 • Xiao Lu, Yihong Cao, Sheng Liu, Chengjiang Long, Zipei Chen, Xuanyu Zhou, Yimin Yang, Chunxia Xiao
Our proposed approach is extensively validated on the ViSha dataset and a self-annotated dataset.
no code implementations • CVPR 2022 • Xuanyu Zhou, Charles R. Qi, Yin Zhou, Dragomir Anguelov
Lidars are depth-measuring sensors widely used in autonomous driving and augmented reality.
no code implementations • 29 Sep 2021 • Xuanyu Zhou, Charles R. Qi, Yin Zhou, Dragomir Anguelov
However, most prior work focuses on the generic point cloud representation, neglecting the spatial patterns of the points in lidar range images.
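A lidar range image organizes points by the sensor's scan pattern rather than as an unordered set. A minimal sketch of this projection is shown below; the sensor parameters (64 beams, roughly 27 degrees of vertical field of view) are illustrative defaults, not tied to any particular dataset or to the paper's method.

```python
import numpy as np

def points_to_range_image(points, height=64, width=2048,
                          fov_up=np.radians(2.0), fov_down=np.radians(-24.9)):
    """Project an (N, 3) lidar point cloud into an H x W range image.

    Each pixel stores the range (distance) of the point that lands there;
    empty pixels are marked with -1.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)

    azimuth = np.arctan2(y, x)                         # horizontal angle, [-pi, pi]
    inclination = np.arcsin(z / np.maximum(r, 1e-8))   # vertical angle

    # Map angles to pixel coordinates: azimuth -> column, inclination -> row.
    u = 0.5 * (1.0 - azimuth / np.pi) * width
    v = (fov_up - inclination) / (fov_up - fov_down) * height

    u = np.clip(u.astype(np.int32), 0, width - 1)
    v = np.clip(v.astype(np.int32), 0, height - 1)

    image = np.full((height, width), -1.0, dtype=np.float32)
    image[v, u] = r
    return image
```

In this 2D grid, neighboring pixels correspond to rays that are adjacent in the scan, which is exactly the spatial structure that generic point-set representations discard.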
1 code implementation • 22 Sep 2018 • Bichen Wu, Xuanyu Zhou, Sicheng Zhao, Xiangyu Yue, Kurt Keutzer
When training our new model on synthetic data using the proposed domain adaptation pipeline, we nearly double test accuracy on real-world data, from 29.0% to 57.4%.
Ranked #21 on Robust 3D Semantic Segmentation on SemanticKITTI-C