no code implementations • 1 Apr 2024 • Yang Liu, He Guan, Chuanchen Luo, Lue Fan, Junran Peng, Zhaoxiang Zhang
The advancement of real-time 3D scene reconstruction and novel view synthesis has been significantly propelled by 3D Gaussian Splatting (3DGS).
no code implementations • 31 Jan 2024 • Xu Hu, Yuxi Wang, Lue Fan, Junsong Fan, Junran Peng, Zhen Lei, Qing Li, Zhaoxiang Zhang
In this paper, we propose a novel approach to achieve object segmentation in 3D Gaussian Splatting via an interactive procedure, without any training process or learned parameters.
1 code implementation • 29 Jan 2024 • Yuxue Yang, Lue Fan, Zhaoxiang Zhang
Thus, MixSup leverages a massive number of coarse cluster-level labels to learn semantics and a few expensive box-level labels to learn accurate poses and shapes.
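A minimal sketch of this mixed-supervision idea, with illustrative names that are not MixSup's actual API: cheap cluster-level labels supervise semantics for every object, while box regression is supervised only on the small box-annotated subset.

```python
# Mixed supervision in the spirit of MixSup (hypothetical names, not the paper's code):
# cluster-level labels drive a semantic loss for all objects; box-level labels,
# available for only a few objects, drive pose/shape regression.
import numpy as np

def mixed_loss(sem_logits, sem_labels, box_preds, box_targets, has_box_label,
               box_weight=2.0):
    """sem_logits: (N, C) class scores per predicted object.
    sem_labels: (N,) coarse cluster-level class ids (available for all objects).
    box_preds / box_targets: (N, 7) boxes (x, y, z, l, w, h, yaw).
    has_box_label: (N,) bool mask, True only for the few box-annotated objects."""
    # Semantic cross-entropy from coarse cluster labels (all objects).
    logits = sem_logits - sem_logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    sem_loss = -log_probs[np.arange(len(sem_labels)), sem_labels].mean()

    # Box regression (L1) only where accurate box labels exist.
    if has_box_label.any():
        box_loss = np.abs(box_preds[has_box_label] - box_targets[has_box_label]).mean()
    else:
        box_loss = 0.0
    return sem_loss + box_weight * box_loss
```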
1 code implementation • 29 Nov 2023 • Yuqi Wang, JiaWei He, Lue Fan, Hongxin Li, Yuntao Chen, Zhaoxiang Zhang
In autonomous driving, anticipating future events and evaluating foreseeable risks empowers autonomous vehicles to better plan their actions, enhancing safety and efficiency on the road.
2 code implementations • 7 Aug 2023 • Lue Fan, Feng Wang, Naiyan Wang, Zhaoxiang Zhang
Consequently, we develop a suite of components to complement the virtual voxel concept, including a virtual voxel encoder, a virtual voxel mixer, and a virtual voxel assignment strategy.
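The sketch below illustrates only the generic voxel-assignment operation underlying such designs, quantizing point coordinates into voxel buckets and pooling a crude per-voxel feature; the paper's actual virtual-voxel encoder, mixer, and assignment strategy are more involved, and all names here are placeholders.

```python
# Illustrative voxel assignment by coordinate quantization (not the paper's code).
import numpy as np
from collections import defaultdict

def assign_to_voxels(points, voxel_size=(0.5, 0.5, 0.5)):
    """points: (N, 3) xyz. Returns dict mapping voxel index -> list of point row ids."""
    voxel_ids = np.floor(points / np.asarray(voxel_size)).astype(np.int64)
    buckets = defaultdict(list)
    for row, vid in enumerate(map(tuple, voxel_ids)):
        buckets[vid].append(row)
    return buckets

# Example: mean-pool the points in each occupied voxel as a crude "encoder".
pts = np.random.rand(1000, 3) * 10.0
voxels = assign_to_voxels(pts)
voxel_features = {vid: pts[rows].mean(axis=0) for vid, rows in voxels.items()}
```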
1 code implementation • 16 Jun 2023 • Yuqi Wang, Yuntao Chen, Xingyu Liao, Lue Fan, Zhaoxiang Zhang
In this work, we address this limitation by studying camera-based 3D panoptic segmentation, aiming to achieve a unified occupancy representation for camera-only 3D scene understanding.
no code implementations • 8 Jun 2023 • JiaWei He, Lue Fan, Yuqi Wang, Yuntao Chen, Zehao Huang, Naiyan Wang, Zhaoxiang Zhang
In this paper, we rethink the data association in 2D MOT and utilize the 3D object representation to separate each object in the feature space.
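As a hedged sketch of the generic association step (not the paper's 3D-representation design), one can build a cost matrix from feature similarity and solve it with the Hungarian algorithm; the threshold and function names below are illustrative.

```python
# Appearance-based data association: cosine-distance cost matrix + Hungarian matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_feats, det_feats, max_cost=0.7):
    """track_feats: (T, D), det_feats: (M, D) object embeddings.
    Returns a list of (track_idx, det_idx) matches below the cost threshold."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T                      # cosine distance
    rows, cols = linear_sum_assignment(cost)  # minimize total matching cost
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]
```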
1 code implementation • 24 Apr 2023 • Yingyan Li, Lue Fan, Yang Liu, Zehao Huang, Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang, Tieniu Tan
In this paper, we study how to effectively leverage image modality in the emerging fully sparse architecture.
2 code implementations • ICCV 2023 • Lue Fan, Yuxue Yang, Yiming Mao, Feng Wang, Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang
Drawing inspiration from this, we propose a high-performance offline detector in a track-centric perspective instead of the conventional object-centric perspective.
2 code implementations • 5 Jan 2023 • Lue Fan, Yuxue Yang, Feng Wang, Naiyan Wang, Zhaoxiang Zhang
To enable efficient long-range detection, we first propose a fully sparse object detector termed FSD.
4 code implementations • 20 Jul 2022 • Lue Fan, Feng Wang, Naiyan Wang, Zhaoxiang Zhang
To enable efficient long-range LiDAR-based object detection, we build a fully sparse 3D object detector (FSD).
2 code implementations • CVPR 2022 • Lue Fan, Ziqi Pang, Tianyuan Zhang, Yu-Xiong Wang, Hang Zhao, Feng Wang, Naiyan Wang, Zhaoxiang Zhang
In LiDAR-based 3D object detection for autonomous driving, the ratio of object size to input scene size is significantly smaller than in 2D detection cases.
Ranked #3 on 3D Object Detection on Waymo (Cyclist)
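A back-of-the-envelope comparison of that ratio, using illustrative numbers that are not taken from the paper:

```python
# Rough object-to-scene size ratios: a car in a typical LiDAR detection range
# vs. an object in a 2D image (numbers are illustrative placeholders).
car_length_m, lidar_scene_span_m = 4.0, 150.0      # ~4 m car in a ~150 m scene span
lidar_ratio = car_length_m / lidar_scene_span_m    # ~0.027

object_px, image_width_px = 100, 1000              # object span vs. image width
image_ratio = object_px / image_width_px           # 0.1

print(f"LiDAR object/scene ratio: {lidar_ratio:.3f}, 2D ratio: {image_ratio:.3f}")
```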
1 code implementation • 18 Mar 2021 • Lue Fan, Xuan Xiong, Feng Wang, Naiyan Wang, Zhaoxiang Zhang
The most notable difference from previous works is that our method is purely based on the range-view representation.
1 code implementation • ICCV 2021 • Lue Fan, Xuan Xiong, Feng Wang, Naiyan Wang, Zhaoxiang Zhang
We first analyze the existing range-view-based methods and find two issues overlooked by previous works: 1) the scale variation between nearby and faraway objects; and 2) the inconsistency between the 2D range-image coordinates used in feature extraction and the 3D Cartesian coordinates used in the output.
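The coordinate mismatch in issue 2) refers to the spherical projection that turns LiDAR points into a range image. A minimal sketch of that projection, with illustrative sensor parameters rather than any specific dataset's calibration:

```python
# Project LiDAR points from 3D Cartesian coordinates into 2D range-image
# coordinates (azimuth / inclination). FOV and resolution below are placeholders.
import numpy as np

def to_range_image(points, height=64, width=2048,
                   fov_up_deg=2.0, fov_down_deg=-24.8):
    """points: (N, 3) xyz. Returns integer (row, col) pixel indices and ranges."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                       # horizontal angle in [-pi, pi]
    inclination = np.arcsin(z / np.clip(r, 1e-6, None))
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)

    col = ((azimuth / np.pi + 1.0) / 2.0 * width).astype(int) % width
    row = (fov_up - inclination) / (fov_up - fov_down) * height
    row = np.clip(row.astype(int), 0, height - 1)
    return row, col, r
```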